Stories by Tom Yager

RTM means "release to mob" for Vista SP1

I've learned in recent days that my perceived paucity of Vista in the wild may be due to impaired vision (of the ocular variety) on my part. The merest whisper of the impending delivery of the first Service Pack for Vista kicked off a public rending of garments the likes of which I've not seen. Granted, Vista Service Pack 1 saves you the headache of downloading dozens of individual Vista hot fixes, and Microsoft sweetens the lot with feature and performance tweaks, but I wouldn't say it's the Second Coming.
I am too mystified by the public reaction to Vista Service Pack 1, and Microsoft's reaction to that reaction, to do accurate reportage on the details. Here's what I do know: Microsoft announced that the RTM (release to manufacturing) of Vista Service Pack 1 will take place in March. This I like. New Vista systems shipped from manufacturers and discs supplied to developers and volume licensees will include the Service Pack. Rolling up fixes means that new Vista users will get one big download from Windows Update instead of dozens of little ones, and there won't be any wondering about whether you've received all the critical fixes. For corporate fleets, IT typically makes sure that accumulated hot fixes are either folded into the system image that IT uses to initialise new clients, or they're pushed out from centralised management servers. But for the rest of us, a Service Pack is a major convenience. And it's free. It's not like you'd turn it down, but you wouldn't trample your grandmother to be first to get it.
The moment that Microsoft dropped a hint that a Service Pack for Vista was coming, anticipation created a buzz more deafening than that generated by Vista's release. Early leaks of Vista SP1 surfaced and were quickly put down, but not before they had been dissected, screenshotted, and "reviewed". For reasons that I cannot fathom, Microsoft apparently offered a release candidate of SP1 (a beta of a roll-up of patches?), and it's said that Microsoft seeded discs among favoured bloggers and media outlets. The have-nots seethed as the haves boasted, and I imagine that many of my readers, like me, were too distracted by a mix of real work and real news to take notice.
Joining this party late gives me the benefit of a simpler perspective. I see this as a sequence of three events that spins a tale more cautionary than amusing. Microsoft engaged the community, as is now its laudable practice to do, on the engineering of Vista Service Pack 1. Those who had been so engaged, and others who wished they had, interpreted Microsoft's announcement of the March RTM as a signal that the software was ready, and everyone had their own rationale for first place in line. The kicker is that shortly after Microsoft told the impatient rabble to wait it out, and that there were still some device compatibility issues to iron out, Microsoft sheepishly and apologetically put Vista SP1 out for public download well in advance of the RTM. Release to manufacturing became release to mob.
Microsoft has discovered the dark side of making process and engineering transparent to the public. I've praised Microsoft's trendsetting model of community engagement in new engineering efforts. The company's responsiveness to public feedback, gathered in part through unfettered employee blogs open to comments, shows in Visual Studio and Windows Server 2008. But as I said, the idea can be taken too far, and I think that's what has happened with Vista SP1. Some work at Microsoft needs to take place behind closed doors. Giving everybody a visitor's badge to the Redmond campus makes great PR and exciting give-and-take but not always great strategy. SP1 turned into a case of take-and-take. Making sure that those who complained the loudest got SP1 first perhaps became more important than making room for those precious final builds, that last proofing of the documentation, that one device or chip set driver that was just a day away. We can't know what got sliced out when the klaxon went off and the Vista team had to do a hasty upload of the code to Akamai.
If Microsoft did make a gift of early SP1 access to those it favours, perhaps under what it understood was non-disclosure, it may understand now that this approach to marketing and relationship building creates more liabilities than benefits. It's a lesson that Microsoft has had the opportunity to learn before. Whether or not Microsoft brought this on itself, it's clear that what good there is in Vista SP1 has been buried by bad press. I'm content to wait until March, and I believe that those who plan to put SP1 in production will likewise wait until it reaches them. As one Microsoft blogger points out, Vista without SP1 is still Vista. SP1 doesn't make it a new OS, and if you've let Windows Update auto-install your critical fixes, you're missing out on very little.
What should Microsoft brace for next? A tidal wave of moaning from the entitled over the absence of their pet fixes and features. Work should already be under way for Vista Service Pack 2, where we're more likely to see what cooler heads had in mind for SP1. I only hope that Microsoft lets its engineers, not the blogosphere, decide when SP2 is ready to ship. I'll say it again: Perhaps there are some projects that don't need a blog.

The mobile applications gold rush has begun

Adobe, AOL, Google and Yahoo see smartphones as fertile ground for rich and hosted apps and services.
Futurists' dreams of the wearable computer, constantly attached to its wearer and to the world, are realised. As foreseen, that technology has changed culture and, arguably, humanity. Professionals rely on smartphones and PDAs as more than aids to recall and communication.

Microsoft's new server OS hits paydirt

Windows Server 2008 (WS08) has been doing an excruciatingly slow public striptease for five years. Now that the last patch of linen has been peeled away, we'll have a chance to see what thousands of man-years yields in terms of innovation. WS08 is loaded with Microsoft's good ideas, almost the way commercial Linux is, and this time Microsoft added the ingredient of wide-open, real-time public engagement in the development process.
The people doing the talking in Microsoft's blogs are high-ranking types who could easily opt out of talking to the public. In a way, WS08 is a product of the collaboration between the world's largest software company and the people who use the products they make. It's pretty cool, and it goes a long way towards repairing Microsoft's reputation as an opaque, insular entity. The idea of having someone who says "I'll get back to you" actually get back to you about bug reports and feature requests is sort of a mind-blower. Microsoft's take on community isn't perfect; not all who raise their hands are called upon. That's not possible for people whose primary job functions involve getting things done, and I wonder how Microsoft manages triage on the mountain of feedback every new public beta must produce.
I have to hand it to Redmond: Windows Server 2008 gives the people what they want, albeit spread across several products. There are only so many eggs that one basket can hold, and there is more money to be made selling one egg at a time. There's nothing wrong with that business model, and it helps defray the cost of inserting buyers into the product development cycle. I think that Windows Server, once you build the essential Windows Server System components around it, is grossly overpriced. WS08's path to release is a model worthy of emulation, but it seems that Microsoft didn't get around to asking buyers what they'd like to pay for it. When it comes time for buyers to write the cheque for WS08, the adage "be careful what you wish for" may spring to mind.
There is one way that IT can keep Microsoft, or any vendor, from reaching into their pockets: Don't buy it. Assuming that you don't work for Microsoft or its PR network, if I asked you to name five WS08 features that have you salivating at the prospect of upgrading your Windows Server 2003 servers, I think you'd have to hit Microsoft's website to come up with your answers. Only the numbers will tell the tale, but despite WS08's certificate of community, it may be a victim of the Vista Effect.
Vista differs from WS08 in that it is the product of Microsoft's bad old "we'll let you know what you want" paradigm of software design. There is much goodness baked into Vista, but I'd much rather see Vista's yummiest bits back-ported to Windows XP. You see, Windows XP is everything I need in a Windows client OS. Ask me what I want and I might give you a list that puts Santa's naughty and nice scroll to shame. I like to play and discover, and Vista has a few of the kind of gems that can make a video game with an explore-and-gather theme so engrossing. But I have to say that Windows doesn't lend itself to this. Whether or not Microsoft was aping Apple, whose OS X reveals something new and relevant every time you sit down with it, Vista showed that Microsoft is no better at making OS presentation and management layers enjoyable to use. I'm not motivated to hunt down Vista's wonders. Move files around? Check. Connect to the internet? Check. Launch apps? Check. That's what I need and expect from a Windows client OS, and based on that, Windows clients hit their apex with Windows 2000 Professional.
Microsoft cursed Vista by trying to move the goalposts based on internal discussion about what users would want desperately if only they knew they could have it. WS08 tries to move the goalposts, too, but Microsoft had external discussions that yielded the public's take on its fondest desires. When asked what it wants, the market is not shy about demanding the whole world and a Dove Bar. But even though WS08 is everything buyers asked for, at the end of the day there has to be a defensible reason to reach deeply into one's pockets to upgrade to a Windows server OS that will likely be tasked to do only what Windows Server 2003 systems do now. IT will download the trialware of WS08 in record numbers. Reviews will range from good to ecstatic. In the end, if the paid-off car you're driving now is still a nice ride, you'd be foolish to take on new payments. At some point, Microsoft will force the issue by declaring the end-of-life on Windows Server 2003, but that's a story for another day.

Self-aware virtualisation would be a blessing

Organisations that leverage virtualisation as a means to provision and reallocate pooled resources still face the challenge of gaining an intimate level of knowledge of application behaviour and runtime requirements.
It's a pity, isn't it, that operating systems and virtualised infrastructure solutions can't just know what applications need and allocate resources to fit, instead of requiring a lot of observation and scripting.
Virtual infrastructure will undoubtedly take on ever-smarter heuristics for automating the distribution of computing, storage and communications resources. But setting this up as the only alternative to scripted agility presumes that software within a virtual container will always play a passive role in the structuring and optimisation of its operating environment.
I submit that to extract ever more value from virtualisation, software must take an active role. However, I still hold to my original belief that software, including operating systems, should never be aware that it is running in a virtual setting. I want to see software, even system software (an OS is now an application), get out of the business of querying its environment to set a start-up and, worse, a continuous operating state. Doing this severely limits the ability of tasks to leap around the network at will, because an OS freaks out if it finds that its universe has changed in one clock tick. In the least disruptive case, if its ceilings were raised, the OS instance (and, therefore, the mix of applications running under it) would take no advantage of the greater headroom afforded by, say, a hop from a machine with 2GB of RAM to one with 32GB. So how can software be a partner in the shaping of its virtual environment without trying to wire in awareness of it?
Clearly, software must be able to query subordinate software to ascertain its needs. The technology exists now to do this at start-up. When commercial software, or software written to commercial standards, is compiled, optimisation now includes steps that give the compiler a wealth of information about the application's runtime behaviour.
One is auto-parallelisation. This stage of optimisation identifies linear execution paths that can be safely split apart and run as parallel threads. That's some serious science, but the larger the application is, the more opportunities there are for auto-parallelisation, and on multicore systems, the win can be enormous. The analysis that a compiler must perform to identify latent independent tasks could go a long way towards helping a VM manager decide how an application can be scattered across a pool of computing resources. If the ideal virtual infrastructure is a grid, then the ideal unit of mobile workload is the thread. If the compiler finds that an application is monolithic, this information, too, could be valuable, signalling that a process can be moved only as a whole.
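If a sketch helps, here is the kind of loop an auto-parallelising compiler hunts for, written in C. The iterations share no mutable state, so they can be proven independent and split across cores; the OpenMP pragma and the GCC switch named in the comments are merely one illustrative way to express that split, not a claim about any particular vendor's toolchain.

    #include <stdio.h>

    #define N 1000000

    /* Each iteration writes only its own element of c[] and reads only
       a[] and b[], so a compiler can prove the iterations independent
       and farm them out to separate threads. */
    static double a[N], b[N], c[N];

    int main(void)
    {
        for (int i = 0; i < N; i++) {   /* set up the inputs */
            a[i] = i * 0.5;
            b[i] = i * 2.0;
        }

        /* An auto-parallelising compiler (for instance, gcc with
           -ftree-parallelize-loops=4) would split a plain loop like this
           on its own; the OpenMP pragma below simply spells out the same
           transformation by hand when built with -fopenmp. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[42] = %f\n", c[42]);
        return 0;
    }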
I'm more excited about two technologies that apply runtime analysis to the goal of optimisation. A two-step optimisation technique involves compiling the application with instrumentation for runtime profiling, which produces a detailed log of the application's behaviour. This log, plus the source and object code, is pushed through the compiler a second time, and the resulting analysis creates potential for optimisation bounded only by the intelligence in the compiler.
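As a rough illustration of that two-pass workflow, the comments in the toy C program below walk through GCC's profile-guided optimisation switches; other compilers, Sun Studio among them, have their own equivalents, and the program itself is invented purely for the example.

    /* pgo_demo.c -- a toy whose hot and cold paths only show up at runtime.
     *
     * Pass 1: compile with instrumentation, then run to record a profile.
     *     gcc -O2 -fprofile-generate pgo_demo.c -o pgo_demo
     *     ./pgo_demo              (the run writes a .gcda profile file)
     *
     * Pass 2: recompile, feeding the recorded profile back to the compiler.
     *     gcc -O2 -fprofile-use pgo_demo.c -o pgo_demo
     *
     * On the second pass the compiler knows which branches dominate and can
     * arrange code layout, inlining and unrolling around the hot path.
     */
    #include <stdio.h>

    /* Statically, the compiler can't tell which branch is the common one;
       the profile from the instrumented run tells it. */
    static long long process(long long x)
    {
        if (x % 1000 == 0)          /* cold path: one call in a thousand */
            return x * 3 + 1;
        return x / 2;               /* hot path: the other 999 */
    }

    int main(void)
    {
        long long sum = 0;
        for (long long i = 1; i <= 10000000; i++)
            sum += process(i);
        printf("checksum: %lld\n", sum);
        return 0;
    }

The point is not the particular flags but the artefact they produce: a recorded picture of how the program actually behaves.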
If this intelligence were available at runtime, then a virtualisation engine wouldn't need to wonder so much about whether a process, thread, block of memory, open file handle or network socket could be safely relocated. The kind of surprises that complicate planning and automated reallocation of resources would be significantly reduced.

Getting back up to speed with benchmarks

Everyone needs an avocation they pursue with passion. For you, it may be cars, clothes, Greek food or detective novels. Now that Ahead of the Curve is reaching a larger audience online than it could reach in print, there are robot makers, solar power fanatics, and hard-core gamers among its readers.
When your favourite pastime yields value while you're on the clock, you are a lucky person indeed. For me, science, most any science, is my passionate avocation, and I'm fortunate to have inhabited the untitled role of staff scientist for InfoWorld's Test Centre for several years. Every few years, I thrash about for an adequately descriptive title that fits on my business card. I took up the title of chief technologist before it found use in IT. House geek, IT enthusiast, Property of Apple, jack-of-all-trades — so many ideas have been floated. But really, my gig is all about turning science into strategy for IT, and for the users and buyers of commercial and professional technology. I'm a firm proponent of planning from the gut. However, the only people who can operate that way with any hope of good results are those who have applied science with success for so long that it has become instinct.
I have been operating on gut for a while on systems, storage, networking, and the basic food groups of IT, reaching out to things like mobile devices for the challenges that keep me fired up. But while I've been tackling that and indulging a fascination for physics and mechanical engineering, one work-related topic that I had relegated to "got it down cold" gave me a rude bite by way of reminder that I'm never too smart or too experienced to go back to school.
Being of a scientific bent, it's natural that I've been a benchmark fanatic since I started writing about computers. There is a great deal to be learned from benchmarks once you understand the gears that spin to make the published numbers light up on the tote board. You learn a lot about vendors and what they think of their customers from their use of benchmarks in advertising. I did my time at the bottom of the benchmark food chain, writing benchmark code, but just before I started working for InfoWorld, I got the ultimate correspondence course in performance characterisation: A complete set of CDs from SPEC (the Standard Performance Evaluation Corporation), its entire library of killer benchmark tests. I carry those discs to this day, the originals in their white paper envelopes, everywhere I go.
SPEC's software and the guidance that accompanies it form my bible when it comes to test methodologies, standards for quality coding, and straightforward statistical analysis, but also organisational transparency and stringent ethics. To dip into the SPEC library, which has been my avocation, is to sit down with the sharpest minds in computer and statistical sciences. It is an awesome and endlessly varied code base, put to use for a very lofty purpose.
Thanks in large part to SPEC, I can hold my own with system, chip and development tools designers, where SPEC performance baselines are taken as understood and SPEC terms are part of the vernacular. I put the science and statistics of performance characterisation in plain language for readers, using lessons that I learned from SPEC. I added a practical angle to my scientific understanding of compiler optimisations, processor scheduling, CPU cache utilisation, comparative efficiency of message passing techniques, and other deep concepts by building, running, profiling and debugging SPEC software in on- and off-label capacities. Thanks to SPEC, I can speak expertly on matters of performance characterisation, its relevance in buying decisions, and its usefulness in tracking trends such as the migration of high-performance computing principles to IT.
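For a flavour of what I mean by putting it in plain language: SPEC's headline CPU metrics boil down to a geometric mean of per-benchmark ratios, each ratio being a fixed reference machine's time divided by the measured time. The miniature C example below shows that arithmetic with invented timings, not figures from any published result.

    #include <math.h>
    #include <stdio.h>

    /* SPEC CPU-style scoring in miniature: each benchmark's ratio is the
       reference time divided by the measured time, and the reported metric
       is the geometric mean of those ratios. All numbers are made up. */
    int main(void)
    {
        const double ref[]  = { 9770.0, 8390.0, 10490.0, 11670.0 }; /* reference seconds */
        const double meas[] = {  410.0,  505.0,   383.0,   622.0 }; /* measured seconds  */
        const int n = sizeof ref / sizeof ref[0];

        double log_sum = 0.0;
        for (int i = 0; i < n; i++) {
            double ratio = ref[i] / meas[i];
            log_sum += log(ratio);          /* accumulate in log space */
            printf("benchmark %d: ratio %.2f\n", i + 1, ratio);
        }
        printf("geometric mean (the headline score): %.2f\n", exp(log_sum / n));
        return 0;                            /* build with: cc demo.c -lm */
    }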
I made good use of the years I spent passionately exploring and dissecting that SPEC library. Readers' interest in benchmarks waned as x86 took over, and no one figured it made any difference whose brand was on the front of the box or on the chips inside. I took one last tilt at benchmark science back in 2005. I volunteered to use SPEC CPU2000, the 800-pound gorilla of benchmarks and just about as friendly, in a review. I knew this cold, but I discovered that my discs, or my knowledge, or both had aged in five years to a point where I couldn't get the suite to compile any more. Whatever the reason, when I published the numbers I could get, apologising profusely to SPEC for violating its rules, SPEC was furious but readers didn't notice. I might as well have run baseball scores. After that, I couldn't set fire to a passion for benchmark science if I soaked it in gasoline. I resolved to keep weighing in on discussions of benchmarks as a seasoned snob, but it wasn't something I did for fun.
One must never assume. Apple carried benchmarks back into favour as a means of marketing to mainstream buyers, capitalising on the fact that most prospective buyers had no clue what the numbers meant and little motivation to find out. This caught on, and now you will find SPEC CPU results in retail brochures. This is good for SPEC, but how can people make decisions based on statistics they don't comprehend? I must rail against this, I must stand up, I must...
I must find out what SPEC CPU2006 is before I go running my mouth about how it's being used and interpreted. I got the benchmark from SPEC, hunched over an eight-core Barcelona server for which results had not been published, and hung in with the requisite pad, pencil, sweaty clothes and fast food. More than one month and 118 runs later, I can now hold my head high as a master of SPEC CPU2006. I learned that while I was relying on what I knew and what I'd read, everything changed. Now that I can run and, more importantly, explain the tests that my readers are studying to make comparisons among vendors, I'm fired up about benchmarks.
The trouble with the thrill of scientific discovery is that it's never enough. I look mournfully at my library of SPEC discs; they seem pretty tired now. The code in SPEC CPU2006 so outshines them, and has brought so many new insights to mind, that I'm reminded of the slow, rough road to mastery that the original SPEC discs started me on. I'm relieved to find myself eager to start down that road again, even though it means discarding a lot of hard-earned knowledge that's become a bit overripe. As the banner in the school cafeteria says, science is fun. When you can bring that attitude to work and end up making smarter decisions with the new knowledge you've gathered from the most trustworthy source on the planet (you), science is good for your career, whatever it says on your business card.

Some Macworld predictions right, others wrong

At this time of year, bloggers come out of the woodwork with claims that they have the inside scoop on Apple's product strategy — specifically, the products that Steve Jobs will unveil in his keynote at Macworld Expo. Prior to this year's show, the pinheads really outdid themselves. A bogus "leaked keynote" was distributed, and the number of losers who bought it, vetted it, passed it along, and gave it play in supposedly legitimate outlets broke records. At least when I take a stab at predicting Macworld Expo, you know that it's based on nothing but my speculation and desires. Sometimes I don't score well — I pretty much pooched my Macworld Expo keynote predictions this year — but at least I turn in my own homework and I fess up to what I got wrong. Not only that, but I give you a microscopic look at the flaws in my reasoning.

Eight-core Xserve shows Apple's server strength

Apple rarely lets any product sit still for long, so when something in its lineup goes untouched for a while, it prompts speculation about Apple's commitment to it. Consider Xserve. I do, and sometimes I feel like I'm the only one who does. Apple's Xserve went Intel with the rest of the Mac line, but instead of keeping pace with x86 rack server competitors and keeping up with Intel's latest silicon like its Mac client kin, Xserve hung back. It's been a two-socket, four-core server in an eight-core world. Ever since the Intel transition, Apple's been quiet on the marketing front for Xserve, too. It looked like Apple might have relegated its server to the back burner, but that didn't jibe with the proud noise that Apple has made over OS X Server Leopard, its first true Unix server OS. A shiny new OS on server hardware that had lost pace with the market? Perhaps Apple was quietly thinking what I've been quietly advising curious buyers to do: Use Mac Pro as a server instead.

AMD's Spider weaves an impressive web

When AMD acquired ATI, the very first thought in my head was that it would stage a triumphant return to the total platform — CPU plus chipset — business. On November 19, it finally happened. AMD paired its new Phenom 64-bit, quad-core, single-socket CPUs with AMD/ATI jointly designed 7-series chipsets to create the Spider platform. That name makes for fetching marketing artwork, but it's also descriptive: Spider is agile on any terrain, quick to react, and yet still as a statue when there's no work to be done. And thanks to ATI, it has exceptional vision.

Microsoft gives developers a holiday treat

Struck sullen by seasonal affective disorder? Soaring airfares got you grounded? Well, Windows developer, be of good cheer, for Microsoft has delivered you from your holiday doldrums. The retail release of Visual Studio 2008 is available now, and that means one of two things: Don't bother showing up for work after the holiday break unless you know it off by heart, or start planning that "training" junket for early in actual 2008.

A MacBook Pro gremlin is vanquished

Nothing gets my ire up as quickly as when a hunk of technology takes on the characteristics of a stubborn animal, especially one more stubborn than I. I've spent the better part of a week struggling, with little success, against a goblin that infested the innards of the MacBook Pro in my possession, and in the course of his exploits managed to shred months of hard work.

Apple's new disaster recovery not so hot

I always do my best to turn misfortune into opportunities for enlightenment, and oh, what enlightenment the past couple of weeks has placed within my grasp. When the MacBook Pro loaned to me by Apple slipped into a coma during a full-volume image backup and subsequently died in my arms, I was forced to deal head-on with the impact of Apple's switch in suppliers and with an irrecoverable loss of data and productivity — a hardship I've never faced in five years with Macs. I lost a full month's worth of work, research, and creative projects, along with every application that requires registration keys and online activation.

Leopard: a system you operate

For the first time since I met OS X back in 2002, Apple is challenging Mac users to raise their productivity to a new level.

Sun's pooled approach to storage is a breakthrough

You've read about ZFS, the advanced storage management facility baked into Sun Microsystems' Solaris Unix operating system. It is Sun's invention, yet Sun has opened it to the world by including it with the mass of Solaris code that Sun has open-sourced.

Leopard: a beautiful upgrade

Apple's announcement of the delivery of OS X Leopard (release 10.5 of Mac and Xserve operating systems) marks the public debut of an engineering achievement that dwarfs iPhone, iPod, Windows and Linux. No other PC server vendor, with the notable exception of Sun Microsystems, invests so much time and manpower in its system software. Leopard is magnificent code architected from the user in, rather than from core technology out.

The next best thing to Mac OS X

It is no small source of consternation to those of us who have grown attached to OS X that systems that would run it best — primarily, those with processors created by AMD — will never do so unless Intel fails to deliver on some promise made to Steve Jobs. It's doubly difficult for me because I'm a devotee of both OS X client and server operating systems. It's triply difficult because Leopard is system software to get hot and bothered about, and I have a solid wait ahead of me: Leopard doesn't ship until the end of October. There's an Xserve here with Leopard's name on it and a Mac Pro pining for it, and they're about as patient as I am. Yes, they speak to me. If you own a Mac, you understand.
Even with the wild wonders that Leopard will bring, and has brought in secret to those with the foresight to buy Apple Developer Connection Premiere memberships, there are things that OS X won't and may never do. I've already mentioned the biggie. There is a dual-socket Quad-Core Opteron at my feet, literally at my feet, running SPEC CPU2006, and it would be a happy box indeed if OS X Tiger Server was making a scorching day for Barcelona. I have an Intel Clovertown Xeon server that thinks, given Intel's cosy relationship with Apple, it is entitled to run OS X. It's not to be. Alas, and bummer.
As an aside, when I broach this subject, I always draw comments from readers who tell me that dude, if I want to run OS X on an AMD machine, it can be done. To all who would offer this helpful advice, I tell you with the greatest disdain I can muster to stuff it. You're all making Apple wonder whether its open source program, which is exploited to pirate OS X's way onto non-Mac systems, is worth the trouble. I have the same advice for my colleagues in the media who treat every advance scored by OS X scofflaws as legitimate news.
Back to the subject at hand: to wit, what a law-abiding individual is to do when deprived of OS X. As a developer and system administrator, I need to be able to run multiple instances of OS X, different versions if need be, or simply for purposes of isolation and consolidation. I can do this with Windows, Linux, Unix, OS/2 and DOS. But despite how technically simple OS X's design would make, and has made, virtualisation (PowerPC OS X's MacOS Classic environment relied on virtualisation), Apple won't let OS X run as a virtual guest, not even on a Mac.
So what's an OS X nut to do when he can't tap the Mac's active and welcoming community, sumptuous documentation, gratis dev tools, out-of-the-box richness, and massive library of free and affordable software wherever he needs it? I recently found the answer: Run Solaris.
A chorus of groans is now rising from my readership. For some reason, Solaris, particularly Solaris on x86 and 64-bit x86 systems, suffers from a reputation problem. That used to make sense when Solaris x86 was the performance- and compatibility-challenged stepchild of SPARC Solaris, but Sun has long had its x86 Unix in lockstep with its SPARC operating system. While Solaris has none of the no-brainer usability and manageability of OS X client and server OSes, I'm finding Solaris to be an increasingly comfortable workmate with enough similarities to OS X to deserve some attention.
For starters, Solaris is legitimate Unix and legitimate open source. OpenSolaris is Solaris to a far greater degree than Darwin can be equated with OS X. Even though it is open source, OpenSolaris plugs into the same free, automatic software update system that commercial Solaris customers enjoy.
Solaris' GNOME-based GUI is a good thing, because underneath it Solaris is a System V Unix, something that takes OS X users a while to get used to. And StarOffice 8, which is now standard with all versions of Solaris, has strong Office document compatibility and a user interface that Office users find familiar.
Like OS X, Solaris has free developer tools, and impressive ones at that. Its compiler, debugger, and IDE suite, Sun Studio 12, incorporates leading-edge standards and processor feature compatibility. Solaris also includes innovations that Apple found worthy of borrowing for Leopard, specifically DTrace, the real-time performance profiling facility, and ZFS, which exceeds the loftiest dreams of administrators who want a fast, simple, and bulletproof dynamic file system. The extent to which Leopard will implement ZFS is unknown, but if you meet ZFS on Solaris, you quickly understand why Apple is so impressed.
The Solaris user, administrator and developer communities are boundless and positively stuffed with everything you ever wanted to know. When you go to Google looking for OS X system-level enlightenment that's not in Apple's documentation, you'll often get shunted off to blogs and half-helpful, albeit well-meaning, user-written texts. Google is a reliable index to Solaris documentation. Even the fuzziest search yields definitive results, and most of those point either to Sun or a major university, both of which reliably hit the nail on the head.
As for virtualisation, it's baked into Solaris, and its specialty is self-virtualisation using a facility called Containers. Like other unique Solaris system facilities, Containers requires some reading, but Sun has done a remarkable job at producing two-to-four-page PDFs that walk you through setup of complex and unfamiliar facilities.
I'm just feeling my way around Solaris after an absence of several years, and so far, I really like what Sun's done with the place since my last visit. Solaris is rough hewn compared to OS X, and the catalogue of things that Solaris won't do for OS X users is at least as long as its strengths, with simplicity and usability topping the list. But Solaris is free in all relevant definitions of the word (I recommend Solaris 10 Express Developer if you want to see all that Solaris can be), and its interoperability with OS X is seamless. After all, Solaris and OS X are both Unix, and if that's not enough, know that PowerBook, MacBook, and MacBook Pro are practically de facto choices among Sun's engineers. There's clearly some clue in the halls in Menlo Park. As I get more acquainted with that, I'll clue you in.
