Googleperplexed

Or – let the Google-bashing begin!

As noted in an earlier entry, Google is in the crosshairs of the technorati now, simply by virtue of its success.

Yes: In America, kicking someone when they are down is bad manners, but kicking someone when they are up, hey, that’s just American!

The latest GoogleFlak is the analysis of Google's SafeSearch option by Harvard's Ben Edelman.

Edelman’s conclusion: SafeSearch blocks thousands of innocuous sites (example: “Hardcore Visual Basic Programming”).

My reactions:

  1. GASP! I’m shocked! Stunned! Amazed!
  2. Nice catch – this might push Google to be better with this
  3. WHO CARES/SO WHAT!?

In order:

GASP! I’m shocked! Stunned! Amazed!

Why should Google be any different than the other filters out there? There are companies whose entire purpose is to correctly filter out the naughty sites, and they unfailingly block sites that are useful (CDC, breast-cancer sites, and so on). Especially in an automated fashion, it’s tough.

That’s one of the reasons that the ACLU and librarians don’t want to have to install filters on library computers: So much good stuff will be blocked out, as well.

While Google is certainly positioned to do a better job than the net filters, I never really imagined that Google would do that much better (see No. 3 for more on this), at least at first.

Nice catch – this might push Google to be better with this

It’s always good to have watchdogs out there – for causes I believe in, those I don’t, those I don’t care about. This group examination is a good “checks and balances” type system. Good for Ben.

It’s doubtful that any harm will come from this analysis/publicity. Yes, Google may have to work a little harder, but that will earn them some respect etc. We can all win. Good for Ben, again.

WHO CARES/SO WHAT!?

I read so much hand-wringing reaction over this "discovery," mainly on blogs but in tech columns as well. I just don't understand the fuss.

Let’s look at a few facts and observations:

  • Google never promised that the SafeSearch filter was 100 percent accurate.
  • What is 100 percent accurate? I think access to information about contraceptive choices should be allowed through; you may think this is unsuitable for your child.
  • Google's response to this study is that they try to err on the side of caution: Whether or not this is true, it seems to be a good policy – kind of like the "innocent until proven guilty" concept. If in doubt, suppress. AND NOTE that this suppression is not censorship: The user turned on the filter, and can always turn it off and resubmit.
  • You don't have to use the filter – Unlike the debate over library filters, Google can be used in two ways: Filtered and unfiltered. Feel like you're missing things? Turn the filter off (off is the default, anyway). Getting too many naughty links for your taste? Turn the filter on. Your choice.
  • Google is not in the business of this type of filtering – the accuracy of their filter is probably not as high a priority as other projects/tools. Let's be realistic. (Note: I'm fully aware that Google is, basically, a harvesting and filtering company, so filtering [indexing, PageRank etc.] is key to its operation. But not in the "naughty or nice" way – at least not currently.)

I don’t know, while it’s nice that the study was done and hopefully shared with Google, I just don’t see what all the fuss is about.

It's as though people expected Google to somehow do a perfect job of this peripheral project. Why?

And has anyone examined, say, Yahoo's protected search to see how much better/worse it does? I saw nothing about this in any of the articles/blogs I read.

Hey, the Google porn filter could be 100 times better than Yahoo’s (or Teoma’s etc…); it could be 100 times worse.

Let’s see some comparisons, and then we’ll have something to talk about.

========

Note: I wrote to Dave Winer about this; he forwarded my message to Ben. Both sent back nice, sometimes-not-agreeing messages to my thoughts. Excellent. I like the give-and-take; it clears the mental cobwebs.

I guess where we still have to agree to disagree is that, while Google has a bunch of really smart techies, filtering – to me – is not high on their priority list. Dave and Ben still hold to the "surprised Google didn't do better" stance; I'm not surprised. It's not on Google's radar (it should be; it's a potential profit center…).

Ben’s note was the most detailed; reproduced below:

Lee,

Your thinking as to the context of this work, its value, and its reception in the community and media is generally consistent with my own.

I do disagree somewhat with your suggestion that there was no reason to think Google might do a far better job in this field than anyone else. They have quite a talented bunch of engineers, with impressive results in major fields (i.e. core search quality!). They also have a huge database of content in their cache. And I, at least, have found it difficult to get a good sense of just what AI systems can do and what they can't — on one hand, they're clearly still imperfect, but on the other hand I'm sometimes shocked by just how good they can be. All that's to say — I started this project with the sense that SafeSearch might well get a clean bill of health.

My real focus here, though, did become the "push Google to be better with this," as you propose in your #2. The service has been in place for three years without, I gather, any large-scale investigation of its accuracy or effectiveness. (And I say that with full readiness to admit that there's lots more I, or others, could do; I'm not sure I'd call what I've done so far a "thorough" investigation, given the millions of search terms and billions of result pages not checked.) I'm hopeful that my work will cause Google to reevaluate some of their decisions and, perhaps most importantly, improve their transparency and documentation as to how the system works.

As to the "who cares" reaction — there's always the potential, in blogspace as well as in commercial news sites, for a story to get overblown. I'm not immediately prepared to say whether that's what's happening here. Personally, I think coverage like that on http://dognews.blogspot.com/ (see the 3:31PM post of yesterday; their deep/permanent links unfortunately aren't working quite right at present) isn't such a bad thing and doesn't make the world a worse place!

Anyway, thanks for the clear thinking here and the explicit taxonomy of the several approaches to this project. That's a nice and, I think, helpful way to present the varying perspectives here.

Ben Edelman
Berkman Center for Internet & Society
Harvard Law School
http://cyber.law.harvard.edu/edelman

Not a Troll

Hey, my entry below dissing mySQL was not meant as a troll.

I said:

When I started this project, I sort of gave in and had it running against mySQL — while I hate it, it is the dominant open-source database (for better or for worse…). MovableType runs against this, and MT is used all over the Blogosphere, so whatever…

As I began coding and wanted to do stuff, however, I quickly ran out of obscenities to use for this sad excuse of a database.

Sorry, mySQL doesn’t do it for me.

To me, mySQL = Microsoft Access. Both do a lot, and do a lot well. For 90% of the uses out there, that's all that's needed just about all of the time.

And both databases are incredibly simple to set up (hell, I'm still screwing with an Oracle install on one of my Linux boxes. What a pain in the keester!). Postgres is a little awkward to set up ("Is the postmaster running on port 5432?"-type errors) — you have to create users, run initdb and all that.

For me, mySQL was a no-brainer (perfect for me!) to install: Installed (from RPM) and it was there. Bang. Simple.

Access, of course, is just “there.”

So both Access and mySQL have merits, but not for running a high-volume, highly transactional Web site.

Yes, my opinion. But look at how often Slashdot – Perl/Mason against mySQL – goes down. Daily. I can't imagine it being the code (though the code is pretty convoluted – download the TAR and look at it. Messy!).

Ditto for the new Internet meme I wrote about recently: Blogshares.com – PHP against a mySQL database. While the traffic volume may well have played a role in its at-least-initial instability/slowness, I think the database was a bad choice. First of all, I think mySQL is just not hardy enough for it, and this is a site that screams out for stored procedures – which mySQL does not support (and still won't in the next [4.1] release).
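(To make the stored-procedure point concrete, here's a minimal sketch – Postgres PL/pgSQL, with hypothetical table and column names of my own invention – of the kind of server-side logic a trading site like Blogshares screams out for: debit the player, credit the holding, all in one call, all or nothing.)

    -- Hypothetical sketch: buy p_qty shares of blog p_blog for player p_player.
    CREATE FUNCTION buy_shares(p_player integer, p_blog integer, p_qty integer)
    RETURNS void AS $$
    DECLARE
        v_price numeric;
    BEGIN
        -- Look up the current share price.
        SELECT price INTO v_price FROM blogs WHERE blog_id = p_blog;

        -- Debit the buyer; the WHERE clause refuses the update if the
        -- balance would go negative.
        UPDATE players
           SET cash = cash - (v_price * p_qty)
         WHERE player_id = p_player
           AND cash >= (v_price * p_qty);
        IF NOT FOUND THEN
            RAISE EXCEPTION 'insufficient funds';  -- aborts; nothing sticks
        END IF;

        -- Credit the holding (assumes the holding row already exists).
        UPDATE holdings
           SET qty = qty + p_qty
         WHERE player_id = p_player AND blog_id = p_blog;
    END;
    $$ LANGUAGE plpgsql;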

mySQL seems to do well with simple selects, but even this "talent" is being overtaken by Postgres, at least according to a fairly impartial test of the two by Tim Perdue of phpbuilder.net.

And while my blogging tool will probably never move off my home box (behind a firewall, blah blah), and will probably never be used for actual production of my or anyone else's blog, it is still designed to be used by multiple users at high volume.

At least that’s my goal — so why not set the bar high?

RE: Blogshares – While I do think that mySQL is not a good choice for this site (lots of reads and writes; data-integrity issues; transaction issues [mySQL does not support transactions]), I fully acknowledge that it's tough to find a host that will run Postgres for you. (To be honest, I don't know if it's even possible…)
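(And to make the transactions point concrete: a sketch, using the same hypothetical tables as above, of a multi-statement update that has to succeed or fail as a unit. Postgres guarantees this; mySQL's default MyISAM tables don't.)

    BEGIN;
    -- Debit the buyer...
    UPDATE players  SET cash = cash - 100 WHERE player_id = 12;
    -- ...and credit the holding. If anything fails in between, a
    -- ROLLBACK leaves both rows untouched.
    UPDATE holdings SET qty = qty + 10 WHERE player_id = 12 AND blog_id = 7;
    COMMIT;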

The Linux hosts all come with Perl, virtually all offer PHP (at least in the upgrade package).

Databases are a different story. Usually there is mySQL and sometimes mSQL — and the database option is often an upgrade. This is changing slowly, but still, it's rough to get good/straightforward database hosting on Linux. (This is also true on the Windows side, with different databases: Access/MS SQL Server/FoxPro.)

So the user is pretty much stuck with mySQL (mucho better than mSQL – at least from what I read…).

So I understand the choice. I just think it’s a bad one that is going to be problematic, and — guess what? — it already is.

That said, I see the reasons that MovableType.org went with mySQL:

  • Like it or not, that’s the database you can get from a Web host. ‘Nuff said.
  • While I have serious "issues" with mySQL, it does do well in "selects only" areas. And what is a blog? ONE person makes updates (inserts); the rest is reads, except for a possible comments section. Like Access, mySQL is well-suited for this.

That still does not explain why Slashdot has not converted to either Postgres or Oracle: It is a highly transactional site.

In addition to users clicking around to stories and comments, there are users adding comments, users meta-moderating, users being added/edited and so on.

There’s a lot of shit going on.

And – about once a day, it seems — that “lot of” hits the fan…

Blogging Tools

Just for the hell of it, I've decided to build a blogging tool.

You know, a Blogger- or MovableType-type tool.

Why? Because I can. And because I’ll learn stuff doing it.

I’m building it on my Linux box in PHP against a Postgres database.

When I started this project, I sort of gave in and had it running against mySQL — while I hate it, it is the dominant open-source database (for better or for worse…). MovableType runs against this, and MT is used all over the Blogosphere, so whatever…

As I began coding and wanted to do stuff, however, I quickly ran out of obscenities to use for this sad excuse of a database.

I rebuilt tables in Postgres and I have not looked back. Everything I’ve wanted to do is easily handled in Postgres. Damn. Nice database.

One thing I don't particularly like – and maybe there's a way around it – is the string concatenation operator: it's "||" (without quotes, obviously).

To me, that's an "or" operator — I'm used to the ampersand (&) or a period (".") for string concatenation. I wouldn't mind the double pipes, except that they look so much like an OR operator. Seems weird.

Example: a value of "Mary" in the first_name column. To make it "Mary Ann", one would enter:

    UPDATE tableName SET first_name = first_name || ' Ann' WHERE [some restriction… name_id = 12 or whatever]

To me, this reads "update first name to first name OR Ann where…" But that's just me, I guess.

On the other hand, Postgres is so Oracle-like that there might be another, more traditional way to do this. Still, it's a little non-intuitive to me (I tried the ampersand and the dot before hitting Google for a solution). Update: All my searching tells me that the double pipe — || — is the only string-concatenation operator. Oh well. Can't say I'm thrilled with that, but what the hell…

Note: mySQL's way of doing this is the CONCAT() function — now that's WAY weird, to me…
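(Side by side, for the record – using the same hypothetical tableName as in the example above:)

    -- Postgres (and the SQL standard): the || operator
    UPDATE tableName SET first_name = first_name || ' Ann' WHERE name_id = 12;

    -- mySQL: the CONCAT() function
    UPDATE tableName SET first_name = CONCAT(first_name, ' Ann') WHERE name_id = 12;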

Blogshares — Update

Wish this were not the case – and wish I had not predicted it previously – but:

I seriously need to investigate some more reliable hosting with better capacity and on a different server to my mail server

(from Blogshares.com, ~5pm CST)

The owner/publisher (what DO we call them? Not “webmaster”, not now…) also said the following: “BlogShares is going to be huge (honest!)” – and I agree.

For how long?

Internet meme, remember…

Blogshares – The Price of Popularity

While the addictive and increasingly popular blogshares site is not really suffering from a true slashdot effect (a spike in traffic due to publicity of some sort), it does appear to be groaning under the weight of its popularity.

Over the last day, a few hits have come up with the following results:

  • Server down but box still pingable
  • mySQL “too many connections” error (another Slashdot legacy … mySQL trying to run large sites….)
  • Server down/box unpingable (current state, as I type this)

While I think the site is great, in the “Internet meme” sort of way (remember “hotornot.com”?), it’s suffering from two main problems:

  • It basically seems to be a labor of love, and — while well done — doesn’t have the hardware it needs behind it. Could be unoptimized queries and so on, as well. Tough to do a perfect job with one person (or small group).
  • It’s getting way popular. This site is going to have to change its hosting option if the traffic keeps up.

But always cool to see a new concept out there, and this one really is well done, both from the concept through the details (how to pick “shares” and so on) through the look and feel. Outside of stability and some little things I find awkward or just plain problematic, it rocks. I wish I could do something as complex as well.

Hell, I wish I could do something trivial as well.

Microsoft and Tools

Had a little task I wanted to accomplish this weekend, and — since it was a pretty straightforward file-system manipulation — I elected to go with your basic DOS batch file.

It worked fine, but since this script was not for me but for a less savvy user, the issue of error trapping came up: Is the drive mounted, is the disk full, is the file locked, was the copy/move/delete successful, and all that.

As usual, it was about five minutes for the actual work that needed to be done, and hours of error-trapping.

And it still wasn't what I thought I needed: DOS just isn't that flexible. Not to mention that a "black screen" popping up weirds folks out.

So I turned to VB (Visual Basic, for the uninitiated…).

Not counting the time it took me to reacquaint myself with VB (how do you refresh a file list on drive change???) and so on, using VB was a lot like using DOS (or – face it – any language): The initial work – for a user knowing exactly what to do – was relatively trivial.

Again, the error trapping was the tough part: Think of all that could go wrong, and then build in the exception handling, resetting variables, setting error messages and so on.

But with VB it was more of a problem as to exactly what I wanted to do/say, as opposed to the limitations of the language and its tool.

I’d forgotten how good Microsoft tools – mainly, Visual Studio – are. Very nice. (Note: I have Visual Studio v6; sure, I’d like Visual Studio .Net…anyone want to buy it for me??? I don’t use it enough to justify it right now.)

I’ve used a lot of tools – maybe not as many as hard-core programmers, since I’ve been doing a lot of scripting languages – but I have used a fair number.

Microsoft's tools are, far and away, the best of the bunch. No wonder — it keeps the developers happy and coding (image of a flop-sweating Steve Ballmer bouncing like a monkey on stage screaming "Developers! Developers! Developers!").

Let's take Java tools: Borland's JBuilder is pretty good, but the Personal Edition I have (free) is clunky. I've used Forte, which sucks. My favorite was Symantec's Visual Cafe. It was very much like a Visual Studio interface, with a nice debugger and so on. Symantec then sold this product to WebGain, which – since it is in the process of going under – sold it to TogetherSoft (which appears to have been acquired by Borland… shit, I can't keep up…). I really haven't seen the product since v1.1, but it was a great tool. (Hmm… looking at the TogetherSoft site — labeled "Borland" — I can't find a reference to Visual Cafe. *sigh* Looks like they killed it…)

ERwin, as a data-modeling tool, is incredibly useful and I love what it can do, but — face it — the product looks like a 16-bit app. Ugly, clunky, non-intuitive but powerful. Not exactly a four-star review…

On the scripting-language side, Allaire's (oops! Macromedia's) HomeSite and ColdFusion Studio are, to me, the best Web editors out there. But I'm a hand coder: For HTML/scripting use, I just need a file browser, a slew of (customizable) menus and good color coding. I don't need validators (I have the W3C) or WYSIWYG tools. I hope Macromedia keeps these tools alive…

Speaking of WYSIWYG tools, Dreamweaver is the best of the bunch that I’ve used.

I've used Dreamweaver most of the times I need a WYSIWYG tool (rarely); I like the way it doesn't screw with my code. FrontPage — a Microsoft tool — is a disaster, but I can see its appeal for the very newest of the newbies. GoLive is fairly strong, but I still like Dreamweaver better – probably because it's more code-oriented than GoLive (an Adobe product, so more visual/drag-and-drop, which makes perfect sense).

Good tools are hard to come by, which is why so many people are so protective of their tool choices. Part of this is human nature – the refusal to change and the inability to admit wrong choices – but part of it is just that, when you have a good tool, you hang on to it. You talk it up. You hope it lives (unlike Visual Cafe, I guess… damn…).

Still Learning

I ran across an old resume of mine the other day (using frames! – what the hell was I thinking?!?).

It’s almost exactly five years old; it was the resume I used when I left Aberdeen and joined cars.com.

Outside of cringing at the look and feel and other issues, the interesting part was the "What I've Done" page, which is not a listing of achievements ("deployed N-tier application to…") per se; it's a listing of what tools I've used.

What I’ve done coding wise, in other words.

OK.

I split this heading into two columns: "What I Can Do" and "What I Can't Do (Yet)".

Not traditional? Neither am I. So it works.

I was heartened to see that, over the past half decade, I've expanded the depth of my skills in the "What I Can Do" category (example: way better and more sophisticated with JavaScript), but the really nice part was to see that just about everything I listed in "What I Can't Do (Yet)" I can do today.

Such as:

  • SQL: I noted that I was (in 1998) currently learning it. Today I do queries in my sleep. While I still don't consider myself a SQL guru or anything like that, I do write often-complex queries, create views, script out table construction, write stored procedures and so on. I'm a million miles from the 1998 "select * from tableName". That's a good thing.
  • Windows NT Administration: While I'm still not an NT admin – nor do I want to be one – I spend/have spent a lot of time with NT admin: servers (iPlanet and IIS), databases (MS SQL Server & mySQL) and home networking. I have up to five machines on my network at any one time: two NT (Win2000), one WinME, two Linux. Again, substantial progress.
  • CGI: By this – I assume – I meant Perl CGIs. And now that's easy. I'm still learning Perl (isn't everyone, including Larry Wall??), but now I use it more and more, both for CGIs and – usually – as a scripting/parsing tool. While a page of Perl code often looks like someone threw up a mouthful of punctuation on the page, it's a killer language. The more I learn it, the more I like it.
  • Active Server Pages (ASP): A recent addition to my computer tool belt. How did I teach myself it? A "hello world" page? No – a content-management system, complete with an admin section. How about that? Nope, it's not perfect, but I'm still learning…
  • Server-Side Java: Sure, I'm still weak on this, but I've written Java apps and JSP applications that access EJBs (Enterprise JavaBeans) I wrote by hand. Written servlets. Not bad for learning on one's own.
  • Visual Basic: I used this a great deal at cars.com to build little widgets that made the day easier – auto template creation, FTP/parser programs and so on. Again, not a wizard. Again, I’ve been there, done that now.
  • Perl: I think I covered this above. Having my own *nix boxes here makes this much easier (Perl runs better on *nix than on Windoze, at least to me….)
  • Unix: I've now worked at companies that ran either Solaris or Linux; I've had to use the shell extensively. And I have two boxes dedicated to this sitting right here in the home office. You learn by doing… Hell, I have about nine shell scripts that run each night for backups (yes, I should consolidate all the cron jobs into one package; in time…) – both Linux to Win2000 and the other way. So if any box craps out, I've got a backup on a different box/OS.
  • CSS and DHTML: Five years ago, I saw the promise, but I never really got around to learning them because it wasn't worth spending much time on them: The browser battles/differences made them somewhat worthless to deploy. I think I was correct: Only in the last year or so (at best!) have the browsers become similar enough (standards… sorta…) that one can deploy CSS and DHTML (CSS & JavaScript). I'm very good at both right now. It's very neat stuff that allows one to build very flexible, extensible sites.

I still have a lot of new things to learn (more database things [like replication], C#, Python and so on), and a lot of depth to add to the breadth I do offer.

Agreed.

Still, it was nice to see that I have pretty much nailed all the things I identified five years ago.

And the last five years of learning doesn’t even begin to touch the other skills I’ve added, some in depth, some not. Additional skills include, but are not limited to, the following:

  • Shell scripting
  • Web server administration: IIS, iPlanet (Netscape), Apache
  • ColdFusion (eep! – not on the radar five years ago, has been a staple of mine almost since that time…)
  • Wireless networking
  • Blogging
  • Database design/construction/maintenance (including ERwin)
  • Lasso (not my choice, but I learned it to help shepherd a product along)
  • VBScript – Implied by ASP, but not required (can use JavaScript)
  • PHP – and lots of it….
  • XML – enough to be dangerous/confused, but I’ve built XML parsers and worked with XSL and SOAP
  • HTML 4.01 – transitional and strict. Very different from HTML 3.2 in many ways
  • XHTML – little done, but I grok it

It’ll be interesting to see what the next five years bring, for me and the Web in general…

Web Maturation: The Death of the WebMonkey

I'm a WebMonkey. An HTML jock, a JavaScript coder, a Photoshop pixel-pusher.

I can architect a front end so it is flexible, extensible, comprehensible (i.e. maintainable) and usually user-friendly.

I’m way past the point of being dangerous with my SQL/programming/backend knowledge; I can do a lot of it with alacrity.

At the same time, I’m an anachronism:

  • I excel at HTML: So what? Who needs that anymore? GoLive, Dreamweaver MX, FrontPage etc. Hell, export/publish from MS Word. Will the code be ugly? Yes! Will it be slower? Yes! Will it matter? NO! – because the folks using these tools are not the Googles of the world, trying to carve a tenth of a second off a load time. These are users who have/want a site with few bells and whistles. I can't argue with that.
  • I suck at Photoshop: OK, let's qualify: I'm better than 90 percent of the population with this omnipresent tool (90 percent don't use it…), and I'm probably better than half of current Photoshop users: Hey, I've done magazine production and so on. I grok it. That said, I know that I can't design a killer site graphically. I don't have the real artist skillset. The built-in templates in a lot of these tools work fine for people (doubt that? Look at all the MovableType folks using essentially a built-in template for that tool!!). I can't argue with that.
  • I've a broad range of skills: So what? Today you get hired, for the most part, for a very specific tool set: Perl scripts written to convert Oracle 8i data to flat XML files for Linux. ASP with SSL for financial firms. Etc. The broad range of skills is enormously helpful – especially to the hiring company – but it really does not play into the hiring process. They want to be able to slap someone into a chair and have them produce by the end of the week, at worst. A new employee's "extra" skills only come into play after the hiring process, for the most part. I can understand this to a large degree – sure, I'll be able to pick up D+++ (but how do they/I know???). *sigh*
  • I can program in a wide range of scripting and compiled languages: The caveat here is that I have not done most of it on the job. I have done a lot of the work for jobs, but usually just to make widgets or whatever for the job that folks don't know (VB, Perl). Or I have done extensive work with some tools, but I can't really point to a visible project that showcases my efforts (VB, Perl/SOAP/XML, JSP, ASP, PHP, Postgres, mySQL, Linux/Linux admin, IIS admin, Netscape [iPlanet] admin, MS SQL stored procs, shell scripts, DHTML…). My visible work – while it's the stuff I currently do best (HTML, JavaScript, CSS, ColdFusion, some Perl) – is fairly small compared to what I can actually do, and little of it is part of my official job description (…shit…). So why should employers believe, or (if they are not jaded) investigate, that which is difficult to see? And, sure, I've programmed applets/apps/EJBs/JSPs in Java. Want me doing it full time? I honestly don't know… so why should the employer? Again, I can't fault that…
  • I don't lie: Kill me now. Honesty is the poison of employment – on both sides, to differing degrees, with differing employees/employers. Life goes on; get over it. Practice: "Sure, I can build a 12-D, browser-based, DNA-driven wormhole flight simulator in three weeks, written in KlingonScript… and it'll be fast and sexy."
  • I can do some stuff that's needed: Such as CSS (1 & 2); I understand HTML Strict vs. Transitional, XHTML etc. Big whoop. Who gives a rat's ass?

Anachronism reality: Big companies won't hire me because I don't have 10 years of experience with [fill in tool that has not been around for a decade]; small companies will flinch because they will be looking for someone to do everything (IIS config, phones, installs, intranet, extranet etc.) and I'll answer honestly, saying that I've done some, haven't done others and [OOOPS! TOO LATE — I said "haven't": I'm toast].

Oh well oh hell…

Webmonkeys of the World: Hmm. I’ve no good advice. No advice at all.

The Web Grows Up

One of the things I do in my capacity as “a person who uses the Internet a lot” (such as for a paycheck) is read, trawl, search the Web.

I see a trend that is not at all remarkable, and one that I believe that I’ve commented on before.

Or not. Whatever.

The Web is growing up: I commented on this recently (March 15, regarding BlogLash – there, I found it….one instance of…).

One result of this maturation is the maturation of the way the code that is parsed by servers and sent to your browser gets generated.

  • Early days: Notepad, vi (!)
  • 1996ish: First editors specifically for HTML appear; one still needs to know how to code tables and so on to survive.
  • 1997-1999: Major strides in dynamic Web sites. The Evil that is MS FrontPage and other (some better, some equally nasty) WYSIWYG "Web-development" tools (actually just HTML tools then) appear, bringing HTML to the common man; a geek is still needed to tweak those nasty tables and other issues. At the same time, the first divide happens: backend developers (Perl and Java the main dynamic tools of note) and front-end developers (HTML jockeys, often with programming/SQL skills).
  • 1999-2002ish: Programming becomes more and more specialized; backend tools (application servers, databases, datafeeds, search tools) take center stage; the presentation layer is the least complex matter. Scripting languages fall generally into five camps: 1) HTML (static sites), 2) ASP, 3) JSP/Java servlets, 4) PHP, 5) Perl (the old standby…). ColdFusion is a presence, but does well mainly with smaller sites that need dynamic content. Perfect for this use, including intranets (so no one really gets to see them).
  • 1999-2002ish Redux: At the same time HTML creation is deprecated, the rise of CSS and DHTML (CSS + JavaScript) takes off, putting a little more emphasis back on front-end development. Tools have not caught up with the technology (so bodies are still needed). Thank god the browsers are finally getting close to similar… now if only all the Netscape 4.x users would log off forever…
  • Today(ish): While Web Services has been a buzz phrase for some time, some work is actually getting done in this area: Not as much as people expected by this date, but I don't think anyone can deny that Web Services will be huge in the future. How near this "Web Services future" is and what form the Web Services will take is way up in the air, but the over-arching concept is sound – a CORBA-type system that will allow dissimilar systems to talk to each other and garner data/content from each other without requiring a specialized parsing/access system for each. Think of it as a phone that translates what you say into the language the person on the other end of the line speaks, and converts their responses into your language. Neat.
  • Today (redux): Again, there is a schism in what is needed. The backend predominates (databases, SOAP, XML etc.), but the scripting languages that actually send the material to the user (or that is first parsed by the server/application server) don't yet have the robust tools needed. Actually, the backend tools don't have robust tools for sending the material to the front end. Yet this is sorta ignored by most back-end developers, which is why I run across so many JavaScript errors in my Web travels.

But I babble (so?).

The upshot of all this is that the emphasis, for better or for worse, is once again squarely on the backend. The hell with the presentation layer (except for the look – NOT the feel [usability]); that's gravy; that can be fixed later.

To a large extent, this is not a bad way to look at things.

And – as tools get better on the backend – the tools will help the front end. I expect MS to be a big part of this, even though I hate its FrontPage, as mentioned above. (Bad code generated! Bad! Bloated!)

OK, those are the (my) facts.

What are the results of this? More to follow…