Open- vs. closed-source jobs

One of the things I’ve been noticing in the classified ads — for Web developer/programmer etc jobs — is the virtual absence of any open source positions.

Yes, there are listings out there for some open source type products — often a job (web admin, say) will need Apache experience, there are rare Perl jobs (usually consulting gigs), and Linux is always welcome — but despite the so-called revolution of open source, the positions are not there.

For example, here’s what I’ve found for the following (in general):

  • MS ASP/ .Net: Quite a few positions
  • C/C++: Quite a few
  • Cold Fusion: Not many, but they do crop up from time to time
  • Perl: As mentioned, out there occasionally, but usually consultant gigs
  • Java/servlets: A lot of positions. J2EE pops up as often as anything else
  • PHP: Nada. Nothing. Hardly ever
  • JSP: Not too often by itself, but usually as part of some other job — Java or CF/ASP whatever. Hybrid/multiple systems
  • Oracle: All the time; highest ranking database need out there
  • MS SQL Server: Frequently listed there, usually with either the ASP/.Net or C/C++ positions
  • mSQL/mySQL/PostgreSQL: Virtually never.

Also appearing quite a bit — especially with either JSP or CF jobs — are Dreamweaver and the general call for Unix or NT administration, as part of a Web development job.

OK — what does this all mean?

Look at the list — you can call Java “open source” sorta, but I don’t consider it such: Beyond the need for users familiar with Apache or Linux (again, usually in a peripheral manner), there is no call for the use of open source software (OSS).

Does this mean OSS is dead?

No.

It just means that:

  1. It doesn’t pay (literally!)
  2. New development — the meat-and-potatoes kind, not in-house use — is virtually divorced from OSS, Apache/Linux excepted.

Does this mean that OSS deployments have stopped?

No.

It just means that the OSS deployments that are happening are happening in one of two ways:

  1. In-house use — such as an intranet — built on OSS (cheap) by existing programmers, the ones hired for C++ development etc.
  2. Used by consulting companies etc that are already staffed — and probably in trouble — and this is what they recommend.

For an example of the former, consider my sojourn at SOS: We needed an intranet, I had to build it. I wanted it dynamic. All we had was a spare (shared — file server, essentially) Linux box with PHP and PostgreSQL (could have done mySQL, too, but admin wanted PostgreSQL. I’m glad). So it was built in that. And to the outside user, what’s the difference? None. Spits out HTML in the end; who cares what happens under the hood. AND — I was never hired for these (PHP/PostgreSQL) skills. I just learned them because those were the tools I was given. I bet a lot of intranets are built this way.

For an example of the latter, extend the intranet metaphor to the consulting company. They can deploy a fast, C H E A P site using PHP/mySQL/Perl whatever, and so that’s what they will propose (it’ll beat any “MS SQL Server/Advanced Server…” quote). And the folks who get the site? They don’t know the difference, either, just like the intranet users.

Now, you could make the argument that there are still companies building sites for folks, but I don’t think that market is too big now. You can use (yech!) FrontPage or whatever and get a static Web site that will, in all fairness, work well for most small businesses.

Could it be built better dynamically? Probably, but the cost and the complexity factor are just not worth it to most small businesses.

Big businesses?

They have learned. They/friends/competitors have been burned. There must be a business case before deployment, and few business cases will — by definition — support OSS: It doesn’t fit any business model. No cost, no support (news groups/community? how can a business plan define/support that?).

So they go with MS or Oracle/J2EE.

I’m not pointing fingers here — it’s really nobody’s fault. It’s just the way things are; get over it!

But it’s interesting.

And depressing.

Right now — as much as it was before the “OSS revolution” — the world of Web development is split into two camps:

  • MS-centric: C#, C++, .Net, MS SQL Server, Advanced server and so on
  • Non MS-centric: J2EE, servlets, JSP, Oracle, Apache/Netscape (server use depends on business size; Netscape for bigger sites). Usually on Unix, but could be on NT

And yes yes yes, there is that third, hidden area of OSS deployment outlined above, but … it’s small and — with the exception of Apache, Linux and Perl — diminishing in relative use.

I also think Cold Fusion is in real trouble, but that’s fodder for an entirely different entry (basically, with MX, Macromedia is taking the first step into Java, and it recently released “hooks” into several different J2EE app servers). By the time the next release rolls around, it will be even more tightly integrated into Java, to the detriment of NT use.

Unless MS buys Macromedia — which is rumored, but I doubt it — Cold Fusion is basically going to go away, becoming a pretty front end for scary Java. (Takeover rumors are based on MS wanting Flash, to undermine Java on the desktop. Valid and of interest, but as XML and one component — SVG — come out, it doesn’t make that much sense, especially now that the CF rewrite [entire app] is in Java, which MS just loves… but that’s my take. I’m a business idiot.)

The very interesting part of this return to two camps — to me — is the coming revolution in Web services.

Yes, Web services have been overhyped, but I think they will happen (but maybe not necessarily as the experts have prognosticated).

OK, assume Web services are coming (they are).

Assume they will — slowly — become huge (they will).

Believe that as far as actually delivering products, OSS has done a much better job than the “two camps” (MS/non-MS), at least in general (I’m sure MCSEs have some very good code samples etc.)

The “camps” are concerned — correctly — with getting it right before putting it out there (if I’m spending $10,000 on a .Net server license, it had better be correct…).

So there have been a lot of delays, a lot of false starts, a lot of vaporware.

OSS keeps cranking out, if nothing else, examples of how it should work (PHP & Perl use of SOAP on Amazon and Google and so on).

And it’s free, which is good for a technology one just wants to check into — no need to download the beta of this or that Web-services enabled server. Just get a SOAP module for Apache.

But once the big guys get it right — learning from the code put out there in OSS land, believe me — it will be a different story.

It will be back to the two camps again, each with slick Web-services capabilities.

Will the circle be unbroken???

CSS tip – .htaccess

Oh, the CSS problem I was having on littleghost.com: Turns out that the solution is to drop an .htaccess file in the Web root, specifying that MIME type.
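For anyone hitting the same thing, the line in question is just the standard Apache AddType directive (this assumes the host honors .htaccess overrides, which XO/Concentric apparently does):

    AddType text/css .css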

While not elegant, this did solve the problem. Still can’t believe that CSS is not set as a global MIME type, but whatever.

The note to me from XO did not really spell out the issue; fortunately I know what an .htaccess file is and to put it in the root (rather than in the /css directory) to make it global. Again, what if I were a newbie?

2003 Prognostications

OK, this is a little late — should be done right before or right after the year switch, but whatever.

Here are some issues that I see for this coming year, and how I think they will play out. I have this information on no reliable authority….

  • Sun Microsystems: Boy, this is going to be a tough year for Sun. They are the Apple of Unices — make their own boxes, which contain their own chips, which run their own OS… (Yes, Motorola makes the PowerPC for Apple, I know…). While Sun boxes are still the heavy hitters — the best bang out there — they are not the best bang for your buck. IBM, with Linux, is really eroding the “need Sun for heavy lifting” mentality. And Compaq (oops…H/P [Hewlett Packard] ) is desperate for UNIX revenue, as well, and they are doing a Linux play, too. Nowadays MS isn’t even on Sun’s radar screen — Linux is. Sun has to do something, and I think that “something” is Java: This is the year that Sun will figure out (for better or worse) just what to do with Java. Essentially, how to make money off it. Because the server business is looking grim for them.
  • Apple: Speaking of Apple, it seems like every year is a good time to trot out the “Apple will finally die this year” type of talk. And every year, this death is greatly exaggerated. I don’t know, I think this will be a relatively uneventful year for Apple (MacExpo opens tomorrow; I could be proved way wrong very quickly). I think they will keep goosing existing products, stabilizing and enhancing them. I don’t see any large inroads being made by the company. While they are capable of making a killer tablet PC, there is no real demand for these products beyond the whiz-bang effect. And the tablets that have thus far come out are more expensive than notebooks; Apple, with its tradition of being more expensive than PCs for the same type of product (ignore the quality factor), would be foolish to follow this lead. But what do I know?
  • Microsoft: It will continue to take flak for continuing security lapses, Passport will somehow develop/display a serious issue that needs attention, it will delay Longhorn and Yukon again (actually, a good thing — release when ready, not to fit a schedule), it will continue to take fire for questionable business practices, it will actually listen to the customer and revamp some of its licensing agreements (actually listening to the bottom line), it will continue to make gobs of money…but at a slightly slackened pace. Also: By year’s end, the .Net Web services architecture will still be pretty much in the prototype stage. No common use (I could be wrong on this one) [Edit: Just read on news.com a few minutes after I wrote this that MSN Messenger has gone dark for many — and this has a .Net backend component. Ha!]
  • Privacy: While a couple of recent court rulings hold some hope for the notion of personal freedom and privacy, as long as John Ashcroft is Attorney General, things will be grim for the freedom types. Why no outcry over this continual trampling of civil rights, from ignoring the Freedom of Information Act when it’s convenient to detaining Americans (“…home of the free….”) without any due process? Two numbers: 9 & 11. Yeah. But he keeps pushing things, and I think there will be a backlash at some point. This year? I don’t think so, and the following year is an election year, so he’ll probably tread more softly then. * sigh *
  • Computers: As with the last couple of years, sales will remain soft, and there really won’t be much incentive to get a new computer. Sure, a tad faster, larger hard drive…but minor differences, really. For example, my 1Ghz machine has an 80G drive. Plenty fast enough, and I probably have about 50G free. Would I like a newer, faster computer? Sure! Do I need a newer, faster computer? Hell no. And I think a lot of folks are in this boat. And with the economy in the dumpster, why shell out still substantial bucks (for a decent system) for something that is not a need?
  • Speaking of the economy… How should I know? If I had to guess, I would say that it won’t get worse, but if there is an uptick this year, it will be moderate. And that damn spectre of war hangs over all like the Sword of Damocles.
  • Wireless: By this I mean Wi-Fi, not cell phones/BlackBerries etc. This will be huge this year, and the introduction of the 802.11g standard will effectively end the short appearance of 802.11a (fast, but G is compatible with the slow but omnipresent 802.11b standard; same frequency, 2.4 GHz). More and more businesses and — especially — homes will make this an almost de-facto standard for PCs and other such stuff. What of Bluetooth? Damn good question. I still can’t believe it hasn’t made inroads. Hell, I have a spaghetti bowl of wires around my computer table, as I’m sure we all do. Anyway… Wi-Fi will help tremendously with home and business networking, and the 54Mbps speed means that even the boxes I have hard wired (both to save buying a Wi-Fi card and because 100Base-T is faster than the 11Mbps from 802.11b) might eventually become untethered. NOTE: Wi-Fi vendors are going to have to do something about security; WEP is a joke (but better than nothing). This may be an issue in the coming year with wider deployments. Think the government will allow free use of more — better — bandwidth in the coming year? This is a definite possibility, but may not be addressed (especially if there is more of this stupid war stuff).

CSS and Mime Types

Well, I found out my HTML4.01 strict & CSS problem at littleghost.com is not me.

I finally got to the point of understanding the problem — the MIME type for CSS was not properly set. This only creates issues when one uses the strict doctype and attempts to import/link a CSS file. Odd.

And to make it worse, it’s only an issue (display issue) for Netscape. Works fine in IE — but the style sheet does fail the W3C validator, so that’s no good. It should pass (when the same CSS is parked somewhere else [different domain] and called from a page on littleghost.com, all is well).

I wrote to Concentric, and they assured me that the type was set. It appears they were just looking at the pages displaying in a browser, and I’m all but certain that the browser was IE (why wouldn’t it be?).

So I set up some example pages for them to look at, and they finally got it. It has something to do with the configuration on my domain. They tested a test html/css file set they had; it worked fine and validated at W3C in some other location. They moved it into my domain, and they began to see what I saw.

They are escalating the issue.

While it’s good they finally got it, what if I was a newbie? Their first response was “yes it works; the mime type is set”.

How would I have been able to tell them they were wrong? I would have spent weeks coding/tweaking to make it work…and never understanding why it didn’t.

I’m actually pretty good at this stuff, but I had to prove to myself (so I could prove to them) that it was them, not me before they’d try to address the issue. If I didn’t have this Linux box here (so I could kill/add MIME types) I might have been screwed, even knowing what I do.

I felt I needed to do this so I could tell them (as I did) that I have this working in three different environments, two NT and one Linux — but the same code fails at Concentric. And what if I didn’t have another domain to park the style sheet on, so I could show them that a call to Concentric for the CSS fails while a call — from the same code — to get the CSS from another domain succeeds?

That’s one of the reasons I like to have two domains, but it’s always nice when they are configured correctly. I’m just learning this administration stuff myself, so I use the actual domain hosts I have as examples of what should be done. So I can see if what I’ve done here will work in the “real world.”

Ah well, we’ll see if they get back to me on this one. I’ve been at Concentric (ok, “XO”) for five years now, and don’t really want to move the domain unless I have to. Just too much of a hassle.

About databases…

OK, I was thinking about databases.

What am I thinking about now?

I’ve been coding my brains out lately, but in a very helter-skelter way that (occasionally) dovetails nicely.

The following is a list of what I’ve been working on lately:

  • Littleghost.com: Revamping my littleghost.com Web site for the first time since launch. See earlier entry.
  • HTML 4.01/XHTML: Now that Netscrape is finally standards compliant, it’s time to really knuckle down and figure out how to use the tools the W3C has given us over the years that we just could not use effectively. This is a large part of the Littleghost.com redesign. (Note: A recent survey [by who?] said that IE 5-6 have 95% of the market. Fortunately, Netscape’s numbers were strongest with their new offerings, NOT v4.x).
  • Perl & PHP: For various reasons and various tasks, I have been doing a lot of Perl and PHP. I like both a great deal. I have been working with both languages for about 2-3 years, but never really got a lot of time to use them. I’m making time now. I wish my providers supported PHP (one does, but you have to code the scripts with a shebang line like Perl scripts and put them in the CGI-BIN, which makes them fairly non-portable)
  • Web services: As mentioned in an earlier entry, this is something that I got into because Amazon and Google are opening their APIs to a degree, and use of XML tools make both sites accessible. No linking; no frame-out. Import the raw data and knock yourself out…
  • Javascript/DHTML/CSS: As part of my “standards” search/pursuit, I’ve been doing a lot of this, and making sure it works in IE and Netscape. For the last year or so I’ve been designing for IE solely, and there are still some quirks required to make anything look the same in Netscape, even the new versions. So — OK — the true “standards-compliant” browsers are not here yet, but they are getting damn close. Thank god the damn LAYER tag is gone….

I’ve been doing a little XML, some Cold Fusion, some stored procs and messing with three different databases (mySQL and Postgres on Linux; MS SQL Server on Win2000), as well. Probably not as much as I should, but there is only so much time.

=======================

One other thing I have been getting into lately is shell scripting. I finally found a book (PDF, on the Web, free) on BASH scripting (I use Bash on my Linux box; to be honest I don’t know if I have the Korn or Bourne shell on there. Doesn’t look like it).

Shell scripts are a pain in the ass, but excellent coding practice. They are difficult because they are so precise. With HTML, you can get away with almost anything (no close TR? the browser understands). With Cold Fusion, you get away with a lot (not case sensitive, loosely typed etc). With Perl, it’ll slap you for case, but other matters are handled transparently (variable $num doesn’t exist? Then “$myNum = $num + 7” will equal 7. No error).
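To be fair, that Perl forgiveness only holds if you skip “use strict” and “use warnings”; a quick sketch of what I mean:

    #!/usr/bin/perl
    # Without 'use strict' or 'use warnings', Perl quietly treats the
    # never-declared $num as 0, so this prints 7 with no complaint.
    $myNum = $num + 7;
    print "$myNum\n";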

Shell scripts require all sorts of rule-following; the most difficult — to me — is the space issue:


I like writing: c = $((b + 4))

Shell scripts require no spaces around the equals sign (and no $ on the left side of an assignment): c=$((b + 4))

Yeah, same thing, but …. just not my usual coding practice.

But good — you HAVE to be precise with shell scripts, which is a good thing. (However painful)

In a case of my “learnings” dovetailing, what I’m doing with the shell scripts is writing scripts to back up important files/directories on my Linux box to the Win2000 box and vice versa. This required the following tools/skills:

  • Shell scripts to do all the work, which includes FTP get/puts and so on
  • Installation/administration of an FTP server on my Win2000 box (freeware)
  • Installing the command line tool for WinZip, so I could write batch files to zip up selected directories
  • Scheduling — on the Win2000 box — the Zip batch files
  • Scheduling — via CRON — the jobs on the Linux box (all jobs run off the Linux box except the Win2000 directory zip, which are batch jobs. Linux is much better for scheduling and scripting [have tar, gzip, permission handlers etc all at your fingertips] )

It’s been an eye-opener.

I currently have eight CRON jobs running every night; before the crons run, I have two scheduled batch jobs on the Win2000 box zip things up.
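For reference, each of those cron entries is just a line in the crontab along these lines (the times and paths here are made up):

    # fields: minute hour day-of-month month day-of-week command
    # run the nightly backup script at 2:30 a.m. and append its output to a log
    30 2 * * * /home/me/bin/backup_sites.sh >> /home/me/logs/backup.log 2>&1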

Pretty cool.

And the best part is that I wrote these a month or so ago, and I just let them go. And they keep working. (Yes, I do check that they ran, and occasionally try to “restore” from a backup: never failed yet).

This was a lot of work — simply because a lot wasn’t in place (FTP server etc), but because I do have at least passing familiarity with the crontab, scheduler and so on, it was pretty straight-forward. Lot of work; lot of time — but no “deal breaker” dead ends. Just busy work, to some extent. I would figure I’d need this or that; I’d do it. No biggee.

Sometimes being an inquisitive geek pays off.

mySql

I wrote — over a year ago — that I was “thinking about databases” and all that.

That train of thought turned into a guest editorial on the subject of open source databases vs. commercial products.

It was interesting to write — made me think — and, of course, the response from readers was the really interesting part.

Sure, I got flamed, called an idiot and all that, but there was a lot of knowledge and experience behind the responses in many cases.

Basically, the article said “Open source solutions are in many areas comparable or better than commercial products, but this is not true in the case of OSS databases. Why no outcry (or am I missing the outcry)?”

And — basically — the response from readers was that what is out there is fine; the options offered by commercial products were just not needed or could not be cost-justified.

Wow. Blew me away.

Because the most widely accepted/deployed OSS database — mySQL — is really a piece of crap. It doesn’t pass ACID tests, it’s filled with proprietary (instead of ANSI-compliant) SQL (such as the “concat” operator! Scary…), and it does not support a lot of the things that make it a database.

While there were dissents — and those who said that, yeah, Oracle is good, but I’ll never use the 10 million configuration options offered (fair) — the general response was that mySQL is just fine for the job.

Basically, people are using mySQL — and other OSS databases, such as Postgres, SAP etc. — much like flat files. Just a big table or two; maybe joining the two in some cases. Very denormalized. The advantage to using a database instead of a flat file even in this case, of course, is that one doesn’t have to write the logic to extract/order/limit the data pulled from the “data store” — SQL is used.

And then you can extend it later — add another table etc — very easily. And — importantly — without changing/adding any business logic.

And that is a good thing.

But it was just a bit scary to me: I had thought that the OSS crowd, in general, was more sophisticated about databases than that. I got notes — and there were posts — from people who had been doing this for years saying, basically, that they don’t use primary keys and so on.

Again, the “relational flat file” syndrome.

While I agree that many projects do not need the weight of an Oracle or MS SQL installation, one should still adhere to good database design and usage no matter the product used. It just seems odd — and surprising — to me that the users of OSS software don’t seem to put a lot of stock in these “best practices.”

On the other hand, I’m judging from the people who posted. And those who posted — or wrote — are probably always going to be those who disagree, not the ones nodding their heads and thinking “yep….”.

All in all, an interesting project (the article and the responses).

Where’d I go?

Damn.

Can’t believe it’s been a year since I posted here — actually, almost 15 months.

I do remember reading something somewhere recently (/.?) that mentioned an article that correlates the rise in blogging with the rise of unemployment among the blogger types — techies.

Makes sense, and sort of works here.

But whatever. Onward.

========================

I’ve finally gotten around to redesigning/recoding the littleghost.com site.

When I got the domain back in July 1997, I spent a weekend putting together a look and feel and all that….and pretty much have not changed it since.

Sure, I added sections here and there over the last five years, but I never really touched the GUI. Added a touch of a style sheet and so on, but nothing remarkable.

So I have begun the process of recoding the site. I’m trying to accomplish the following:

  • Slowly bring the look and feel of the separate sections together
  • The look and feel will be HTML 4.01 compliant and pass the W3C tests for HTML and CSS. Style-sheet driven site
  • The coding should be XHTML compliant, as well. This will take a bit more work, replacing tables and BR tags and so on
  • Make it look virtually the same in IE6 and NS7 — those are the only browsers I’m really worried about. (Note: The site will not render well in NS4.x, because of that browser’s poor CSS support.)

As always, this site is really for experimentation and so on — it’s not supposed to be a real site that people really want to visit. For all the servers and so on I have locally, having them hosted remotely is different.

For example, there is some bug at Concentric that does not allow the inclusion of (or, at least, acknowledgement of) a style sheet if the doc type is html 4.01 strict. Replace with HTML 4 transitional, and all is fine. Weird. I have to figure out just what is happening there.

So, currently, I have the style sheet called from geistlinger.com, and it’s fine. Go figure. Works fine locally on NT (Win2000 pro) and Linux (Apache). So I dunno. More things to check into! Oh boy….

Conversion is going well so far; I’m glad I waited until I had a little more experience in HTML 4.01 coding before converting — it’s not really as straightforward as you might think, especially when you approach it (like I do) with an HTML 3 & HTML 3+ mindset. It’s still hard to think in terms of DIVs and not TABLEs, how to align, messing with the inheritance issues of CSS styles and so on.

It’s been a nice learning experience.

So far, I’ve converted over the main page, the postcard section (for the most part — large CGI rewrite necessary, as well) and the Term Glossary (need to import a new version of this from my Linux box).

I have not decided whether or not to change this area — Blog This! — to the new format. Would be a good exercise, but the first issue is functionality, and I don’t want to mess this up just for uniformity in looks. The looks will get there; I have to make certain the functionality is not affected.

==============

Other than that, I’ve been doing a lot of coding, from Perl through PHP to Cold Fusion. Database work has been relatively light recently, just a stored proc here and there, some tweaks as new sections need it and so on.

One thing I did spend several days on is using Google’s open API as a Web service. This rocks.

Basically, I can make calls to Google’s database and pull back the results to my (Linux) machine and massage the results as I see fit. It’s done via a SOAP wrapper and a local WSDL file (provided by Google).

We’re talking a Web service. And it works. How cool is that?
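The Perl side is only a dozen or so lines thanks to SOAP::Lite; roughly like this (a sketch from memory, so treat the details as approximate: the license key is obviously fake, and the parameter order comes from Google’s WSDL):

    #!/usr/bin/perl
    use strict;
    use SOAP::Lite;

    # GoogleSearch.wsdl is the local WSDL file that ships with Google's API kit.
    my $google = SOAP::Lite->service('file:GoogleSearch.wsdl');

    # doGoogleSearch(key, query, start, maxResults, filter, restrict,
    #                safeSearch, lr, ie, oe) -- order per the WSDL.
    my $result = $google->doGoogleSearch('my-license-key', 'littleghost', 0, 10,
                                         'false', '', 'false', '', 'latin1', 'latin1');

    # Massage the results however you like; here, just dump titles and URLs.
    for my $hit (@{ $result->{resultElements} }) {
        print "$hit->{title}\n$hit->{URL}\n\n";
    }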

I’d love to publish it out here on littleghost.com, but the necessary SOAP wrapper (I wrote the program in Perl) is not available on either of my domains — so I can only run it on my Linux box. Still cool….

Amazon has a similar program going; I have to try to see if I can get that to work. Maybe this time I’ll do it in PHP (need the PHP SOAP wrapper for my Linux box, however…).

Lots to learn out there, and the industry leaders in services are turning out to be companies like Google and Amazon, and not players like IBM, Sun, M$ and so on. Interesting. While the “real” players (IBM etc…) will catch up quickly, I think it’s interesting that the pure players — the “all Web” players (Google etc…) — are really making a difference, and making the promise of Web services (which is wildly overhyped currently) a reality for the average Joe Developer to see.

You go guys….

Digging into databases

I’ve been thinking a lot about databases recently.

One of the reasons is my increasing focus on databases/dynamic sites, and my relatively recent exposure to databases.

Sure, I’ve done Access and worked with sites that have run big databases, but until the last couple of years I never really made an ODBC connection myself.

Now I do it with alacrity and increasing frequency.

So I think more about it.

In a way, I have a good point of view: I have a solid background in Web design and, to a degree, application architecture. And while my database exposure has been relatively recent, I’m not a stodgy old man on this front: No, I’m not an expert in databases. But I sure wish I was. They change everything for a Web developer.

With all the tools I can bring to the table, I’m able to see the flaws and strengths of most databases — or at least the database/application nexus.

What I’m seeing, overall, is the following:

  • Badly designed databases (I’m as guilty of this as the next, but my mistakes stay on my home computer, they don’t run actual businesses).
  • Bad databases for open source (mySQL & PostgreSQL, though both have their good points, as well: mySQL — installed base; PostgreSQL — great database overall but for lack of tools).
  • Stupid, awkward database connections by programming languages (Perl, PHP, ASP for example). I’m spoiled: Even if you hate Cold Fusion — and many programmers do — you cannot deny that it is the easiest to connect to a database. Set up a DSN in an admin tool, and then to run a query and get results you have to do all of this:


    <cfquery name="getEverything" datasource="myDSN">
        select * from table
    </cfquery>


    How hard was that? (For contrast, a Perl sketch of the same thing follows this list.)

  • Uncomfortable database/application solutions. Use a Cold Fusion example again: To me, the best solution here is to run Cold Fusion on Linux (faster than on NT; way cheaper; more reliable) and MS SQL Server on NT/2000. Virtually no one does this. The platform wars are still hurting us, although things are getting better.
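For the record, here is roughly what that same kind of one-liner looks like through Perl’s DBI (a sketch; the driver, connect string, login and table name are all made up):

    #!/usr/bin/perl
    use strict;
    use DBI;

    # Connect (hypothetical PostgreSQL database, user and password), prepare,
    # execute, fetch, clean up -- all of that ceremony just to run one select.
    my $dbh = DBI->connect('dbi:Pg:dbname=intranet', 'someuser', 'somepass',
                           { RaiseError => 1 });

    my $sth = $dbh->prepare('select * from mytable');
    $sth->execute();

    while (my $row = $sth->fetchrow_hashref()) {
        print join(', ', values %$row), "\n";
    }

    $sth->finish();
    $dbh->disconnect();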

What tech will stand the test of time?

OK, we can do this the long way or the short way.

Right now, I feel like the short way. I’ve discussed much of what I will give a “cash” or “trash” rating to before; now is not the time to go into detail for this. This is spozed to be a quick overview.

The question is, what technologies will survive in a meaningful way (yes, XXX database is the darling of 5 billion sites. With total incomes of US$5.00. Great database, “trash” on this list). The question comes down to a business decision; can it help us fiscally? Is it something that one should spend time learning (Java, yes. C#? Unclear. FORTRAN? Probably not…)? CASH. If not, TRASH, however cool. One has to establish a fulcrum (however bogus) and work off this basis. Sorry.

  • Open source software in general — TRASH: With the exception of Linux, it will be marginalized even more than it is today. Big players — i.e. Microsoft, IBM, Oracle and so on — will come up with solutions that match or exceed OSS, and they have sales forces and marketing money….

    Application environments

  • PHP — TRASH: It will endure and grow, but unless something fundamentally changes with the Web, it will never be the powerhouse ASP or JSP are. I actually have trouble with this one, because I both like PHP and I do see some sites moving from JSP to PHP. Interesting; should be the other way around…or should it? That’s why I have trouble. PHP has overcome the initial “cool but so what?” worries and is now a solid language (v4 was a big leap), but it still suffers from one huge liability: Database issues. 1) Like ASP, I hate the mechanism to hook to databases in PHP (yeah, Perl is even worse); 2) One goes with PHP because one is running *NIX. What databases? MySQL is the No. 1 open source database, but really not a mature database. PostgreSQL is much much better, but no tools, no installed base and bad business decisions (Great Bridge be gone). That leaves Oracle, which rocks, but costs a bundle. Which almost defeats the point of using free software for developing. My guess is that there will be a lot of small sites — much like Cold Fusion sites — by small companies or as intranets using PHP/mySQL. Nothing to speak of at the enterprise level, however. Where I might be missing the boat: PHP is slowly becoming the tool of choice for Perl coders who need a more “HTML friendly” language. PHP combines HTML hooks and uses a lot of Perl (and Java) functions/regular expressions and so on, so this could well be true. I’m just making this up now, but a solid case could probably be made for this.
  • Cold Fusion — CASH: Not to a large degree, but Web sites are getting more and more dynamic — a year or so ago a database-driven site was the exception, now it’s the norm, and that is what CF excels at. I expected better things from v5.x; right now it looks more like a minor point update, not a full integer upgrade. Performance boosts and stuff that PHP had before. It will be marginalized vs. ASP and JSP, but will still power many sites IF it keeps growing (the ease of database access and coding is why it rocks; if this does not continue to improve as other technologies do, it’s doomed). Serious geeks will always consider CF a tinker-toy language, which is true — and that’s its selling point. You don’t need a degree or a team of technologists to create a dynamic intranet site. One drawback to CF is the same issue as with PHP: Database issues. Yes, CF runs against all databases, but most deploy it against MS SQL Server. Which is great, actually. But, as mentioned above, open source databases largely suck. So users have the choice of CF on NT with SQL Server, or CF on Linux (say) against a MS SQL Server database on NT. The latter has better performance, higher reliability, lower cost. And the former is almost ALWAYS selected, because it’s easier. One OS is good; IIS comes bundled with NT… Sigh.
  • ASP — CASH (big time): Face it — it’s a MS product, it is blazingly fast because it is all native (NT/IIS), nice COM and DCOM hooks. Lot to say for it. I don’t like it, but I’m not a Microserf. Powers a lot of sites, and will continue to do so. It’s partly a perception problem — IT managers will have a tough time selling, say, PHP or Cold Fusion to management — management has no clue (“What’s a pee-ache-pee?”). Can’t go wrong saying “all Microsoft site,” won’t have to worry about compatibility, one sys admin can take care of it all and … yes, Microsoft will be in business in 10 years. Will Allaire — oops! — Macromedia? Will PHP even exist in 10 years?
  • JSP/JHTML — CASH: This is a tough one, because — as I’ve mentioned — I’ve seen some indications of people moving from JSP to PHP, which is contrary to what I’d expect. That said, I still think it will grow, because data-driven sites are mandatory now. JSP (or JHTML/servlets) is UNIX’s answer to ASP. Those with big Sun boxes don’t have too many choices — PHP, Cold Fusion, Perl/Python or some Java solution. The first two are really not scalable as currently offered (I doubt Cold Fusion will ever be really Enterprise ready — reads, yes. Writes, no). Right now there seems to be a split between a home-brewed Perl/Python package (Mason and mod_perl have helped this) and Java solutions for the big companies on UNIX. I see Java rising to the top shortly, if for no other reason than the rapid climb of middleware such as WebSphere and WebLogic — Java tools/solutions for UNIX. This part of the market will get even bigger in the future, especially the higher up one goes on the Fortune XXXX list. (Ironic note: Microsoft is always dinged by detractors for closed, proprietary products. While true to a degree, the MS platform — simply because of its sheer ubiquity and relative [to Sun/Oracle, say] low cost — offers more development options. MS will run Oracle, mySQL and MS databases. It will run Cold Fusion, PHP, JSP, app servers and so on. While performance might not be as good — or security as high — there are options.)
  • PERL — CASH: Larry Wall once said Perl was “the duct tape that holds the Internet together” or something like that. This becomes less and less true every day, but it still is extremely valid. Perl is everywhere, even if not as a development language (I personally think PHP will be the new Perl for Web dev). It’s doing data transformations, image processing, imports, exports and so on. It will never go away. It’s that good. It can never hurt to know Perl, even in an NT environment. Yes, gone are the days where a dynamic Web site was a Perl CGI opening a flat file and returning results, but Perl still has a very big place in Web development, even if it is not front-end development. Please don’t build your site in Perl; please learn Perl.

    Databases

  • MS SQL Server — CASH, bigtime: If you buy into the MS message, you will be running SQL Server. There will be some folks who will — for some personal reasons — run Oracle on NT, and there will be those who — for cost savings — run mySQL (or even Access!!!!) on NT as well, but for any serious Web development, you run SQL server if you run MS stuff. And this is personal, but I think SQL Server is the single best MS product. Really. And v7 made enormous strides over v6.5, v2000 is supposed to be better still, even if not the quantum leap that 0.5 version leap indicated. If MS ever ported SQL Server to Linux — which I seriously doubt — it would instantly become an enormous hit. I still think there will be a lot of sites running either PHP or Cold Fusion on Linux against a SQL Server database. This is a great combo — the best of both worlds. It won’t happen as much as it “should,” simply because most places are not forward-thinking enough to run in two environments. And to be fair, it is more daunting to do so.
  • Oracle — CASH, bigtime: Oracle is the No. 1 database for operations where money is no object. That’s about how much they charge, but they have gotten better with the “i” products. While most people don’t need the 2 gazillion tuning options Oracle offers, some do. While MS SQL Server 2000 is making some serious inroads into the true enterprise market (think Amazon.com, Dell.com, Ebay.com), Oracle is still the standard bearer. Also, if you are running on UNIX — which most of the enterprise market did prior to a year or two ago — Oracle was basically the only real choice, unless you were an IBM house.
  • DB2 — CASH: I really don’t know enough about this to say much, but it’s an IBM product. And IBM has really gotten its Internet act together over the past couple of years. DB2 is their database of choice — because it’s their database — but they do support other databases in their WebSphere development product. My guess is that most use Oracle, unless they are a “totally IBM” shop. But it’s IBM. They will survive. DB2 will survive at least in the short run just for this reason, even if it is a bad database — which I don’t know one way or another.
  • Sybase/Informix — CASH: Informix is tough, as it has been purchased by IBM. What does that mean? Unclear. Replace/augment DB2? Wither away and get those users on DB2? I dunno. My guess is that both will maintain a presence, mainly at enterprise-level companies, but they will not be the real movers and shakers. Like COBOL, there will always be a need for it. Yet each year it will shrink, and few – if any – brand-new installations (new installation in company without existing products running the DB) will appear. I really don’t know much about these guys. I could be way off base on this one.
  • mySQL — TRASH: This is not a good database, though recent attempts have made it better. It does have the installed base crown for OSS databases, but … so? I have been reading more and more articles about how to hook PHP (the basic dev tool for mySQL, along with Perl I guess) to MS SQL Server. Which would have been unthinkable a year or two ago. OSS with MS?! Are you mad?! Times change; more sophisticated sites need the support only a database like MS SQL Server can give them. These articles are on OSS sites, too. And the articles’ message threads have more “Help! I’m having trouble doing this!” than “MS sucks!”. Draw your own conclusions. mySQL will probably never go away; it will become the Access database of the OSS world. But it’s not going to reach any higher than Access has already stretched, which ain’t saying much.
  • mSQL — TRASH: OSS database decisions often come down to mSQL and mySQL. The latter always wins (people don’t know about PostgreSQL). mSQL? Stick a fork in it.
  • PostgreSQL — TRASH: This one is tough for only one reason: Because it is the database solution now offered by Red Hat to help the company give businesses an alternative to Oracle. But Great Bridge — the company that employed many of the Postgres founders and coders and tried to sell a commercial version of the product — is gone. The pinheads refused to partner with Red Hat. So Red Hat said, “OK!”. And suddenly Great Bridge is … unnecessary. Ouch. Back to the point — because of Red Hat, Postgres might endure, but I don’t know. It’s a great, stable database, but it came too late. The world was already carved up between “I want an open source solution” (mySQL wins) and “I need a REAL database” (and people go with either MS SQL Server or Oracle, depending on need). There is no need for a REAL database that is also OSS, unfortunately. I like Postgres; I run it. It will go away slowly….
  • Access — CASH: No, this database does not really belong here, but let’s get real: Most companies run on a Windows network and have Windows desktops. All can run Access. Access is easy to hook up to Cold Fusion (good intranet dev environment), and can also — with more work — be hooked up to PHP. Access is essentially free (comes with all business machines, basically) and is a very easy database to use. So it will stick around. NOTE: There were a number of sites out there — such as Chrome Data — that actually ran a complex Web site off Access. Those days are pretty much over — users are moving to SQL Server — but it still can be done, and will be to a certain extent. Take my site, for example. I get three hits a year. Access database support is $10/mo; SQL Server is $20. Gee, which will I pick????

    Webservers

  • Apache — CASH: This is another odd OSS “cash.” While Apache does power the majority of the Web sites out there, the percentage is shrinking (IIS is replacing it, for the most part), and the sites that it does power are small sites — small business sites, hosting companies that run only Linux and will be out of business in five years (not because of Linux, but because of mind-set) and such. Virtually all large, highly transactional sites run Netscape. Those that don’t run IIS (such as dell.com). Apache is true OSS and no one really makes money off it except O’Reilly publishers. But it’s a great server — complex (but not for UNIX heads), flexible, fast and secure. One concern is why we still don’t have an Apache v2.x. Still in v1.x. Yes, it’s open source, and no one gets paid for this (some company is trying; I can’t recall its name — may be outta bizness). Apache is also available for NT, so lots of sysadmins — UNIX dorks familiar with Apache — will use that instead of the notoriously insecure IIS. For the most part, this will be small businesses and intranets, but there will always be a market for Apache on NT, as well. Just won’t pay.
  • Netscape (iPlanet) — CASH: Still the standard-bearer for enterprise-level servers with high transactions; even the Sun/Netscape iPlanet fiasco can’t dethrone it. Netscape rules in the UNIX world and has some presence — but not as much — in the NT world. One thing many users don’t realize is that the iPlanet FastTrack server is free and a good approximation of the very high priced Enterprise edition. If you run Enterprise in the workplace, run FastTrack on your laptop or whatever. Great server; easy to administer. Better than Apache in this respect, simply because I’m a fan of running the same software all over so things can translate well. Learn how to set the primary document directory on FastTrack, and you’ll know how to do it on the Enterprise product.
  • IIS — CASH: Wow. Has IIS gotten beaten up lately. The Gartner Group even suggests replacing it. People won’t. It comes with NT, integrates well, has the same interface as other NT tools/products. Until they get hit, people will not abandon it. Also — if you’re running ASP — you’re fucked. That is pretty much your only choice. (That’s how MS gets you…). If you don’t run ASP, I don’t see any reason to run IIS except that it comes with NT. So what? Apache is free and faster for non-ASP things. But people won’t do that, I know…that’s why it’s a CASH.
  • All other servers — TRASH: Yes, some have their place and all that, but these three run 97% of the Web. Any questions?

These are the three big pieces of Web development right now; it will be interesting to see how wrong I am in the short and long run.

I deliberately ignored (with some references) middle-tier products, as well as XML and other transformation languages. I wanted to focus on the basics as they are today.

Yesterday there was static code and Web server.

Today there is dynamic code, database (for that dynamic content) and the Web server.

Tomorrow?

Check back…..

9/11 – first, thin thoughts

This is the beginning of the first weekend following a week that will, like Pearl Harbor, live in infamy.

Yes, it is the Saturday following the Tuesday bombing — there is no other single word for it — of New York’s World Trade Towers, the Pentagon and a failed attempt to steer another hijacked plane into another Washington, D.C. site.

This is a tech blog, yet let’s face it: No one — on TV, on stage, on the Web — can write/talk/think about much except this. Half of Slashdot was about the bombings for days — and that’s geek central. Says something.

I personally escaped the personal impact of the terrorist attacks — I did not lose anyone, or my own life, in these acts.

I did lose a job, however.

I was supposed to receive a job offer on about Tuesday from a large corporation. Due to the bombings and the uncertainty of what lies ahead, this corporation established a hiring freeze. So I was frozen out.

As I explained to the corporation, I don’t like it, but I certainly fully understand. Things have changed, and no one is quite certain in what way they have changed.

And I have this totally in perspective: Yes, I lost a job opportunity. Thousands lost their lives; many thousands more will be forever impacted by this day in ways that go WAY beyond my piddly job opportunity.

‘Nuff said.


One of the debates going on right now — actually, it all began shortly after the bombings — was that this is either:

  1. The first true Internet war (people say Kosovo, but the columnists scoff)
  2. The day the Internet failed

Realistically, it’s probably a little bit of both.

To a large degree, I think the Web did fail in so many ways for people, especially on Tuesday, as the bombings were occurring.

I was online early Tuesday morning, checking/answering e-mail, hitting this and that site (mainly working on a freelance project). I finally hit CNN just after the first plane hit. I remember wondering — as the page was taking forever to load — if anything was up, because CNN is usually snappy at this time.

I saw the picture, wrote an e-mail to Romy and told her I was going to watch TV.

Which I did, for 15 straight hours. I didn’t even shower until about 3pm. I just watched.

I would occasionally check CNN — and saw it go into the worst crisis mode I have ever seen: Logo and HTML text. That’s it.

Obviously, they were getting pounded.

I couldn’t even reach msnbc.com or abcnews.com. I’m sure AOL was nailed.

The Net failed a lot of people.

In a Wired.com article shortly afterwards, author Leander Kahney said that the Web didn’t fail, you just had to know where to look.

Uh, my Mom doesn’t know about Slashdot and the mirrors many folks scrambled to put up to help get the word out.

The Net failed — it did not operate in the fashion people expected. Most could not get information. Yes, the geeks could. My mom couldn’t.

It failed them.

On the other hand, there was a lot of great first-hand information out there — yes, if you knew where to look — and many people took the time and bandwidth to get the info out to people. It was great in that respect.

To me, the coolest thing that happened on the Web was the way everyone did band together to help. Some attempts were misguided to a degree — I have seen at least a half-dozen sites offering to be the clearinghouse for missing persons information (this should be consolidated in one place so people, again, don’t “have to know where to look”) — but for the most part these were excellent efforts. Amazon, in particular, should be commended. For this entire week, the top page of Amazon — that e-commerce juggernaut — has looked like the image on the right. Totally devoted to getting donations — which they will process at their expense — to the Red Cross. To date, $5.4 million has been raised through this one effort.

That rocks.

Let’s be honest: Amazon will reap a lot of positive publicity for it, but I don’t think that will make up the lost income from people who would have gone to the site and made an impulse buy. But even if they DO end up making money off this effort, what’s the down side? Money was sent to the Red Cross, people had a place — a highly visible place — to go and help out in some way. I don’t see the downside.

What a great concept. This is the Web at its best.