Stories by Tom Yager

AMD notebook shows signs of rivalling Apple

At the logic level, MacBook, the benchmark for success in mainstream notebooks, is unremarkable — indistinguishable from every PC notebook built on Intel Core 2 and its chipset-integrated graphics. Why, then, can't anyone with the same parts list emulate Apple's growth in an otherwise stagnant notebook market? Because Apple painstakingly hand-optimised its OS for a tiny variety of hardware architectures, presently Intel Core 2, while Microsoft wrote Vista to run on absolutely everything. No PC notebook maker can take the proprietary route that Apple plays to such advantage.
Microsoft can't crank out proprietary cuts of Vista for each notebook vendor's choice of suppliers. The best hope is a hardware architecture that's optimised for Vista. Not only that, but optimised for 64-bit Vista running on a battery. That radical objective drove AMD's design for the total notebook platform nicknamed Puma, and now dubbed, temporarily I hope, AMD's Next Generation Notebook Platform. This platform's Turion X2 Ultra 64 CPU is not cut from the common cloth of adapted desktop platforms like Core 2 that rely on machinations of the OS to balance performance with battery life. The combination of Turion X2 Ultra 64, AMD/ATI scalable graphics technology, AMD's M780G bus interface, and SB700 South Bridge, all connected via AMD's HyperTransport 3 bus, is core to AMD's recipe for consumer, business and high-end notebooks. OEMs have just one number to call for platform parts. AMD doesn't make Wi-Fi silicon, so it set up close partnerships with Broadcom and others to add 802.11n wireless to an integrated supply chain.
To a notebook OEM, a standardised bill of materials that covers a whole product line is a dream come true. That explains why global and US first-tier notebook vendors including Acer, Fujitsu, NEC and Toshiba put Turion X2 Ultra 64 models on the street on AMD's June 4 launch day. There are other notable names on AMD's list of notebook wins, but for reasons that one needn't strain to understand, they're not rocking the boat with a big fuss on AMD Notebook Platform Day.
Deadlines demand that I give the specifics of AMD's new notebook platform shorter shrift than I'd like. Apart from the freshly baked, notebook-specific CPU, the platform's defining aspect is graphics. ATI's continuum of graphics solutions for this platform provides OEMs with the ability to build systems with integrated, hybrid (integrated plus discrete), and discrete GPUs without major redesign. I am no fan of integrated graphics, but AMD is mighty proud of its integrated performance relative to Intel's. I have no trouble imagining that, since a box of crayons and a pane of glass can outperform Intel-integrated graphics. Maybe that's how Apple does it ...
AMD's Hybrid Graphics design allows system designers to add a discrete ATI GPU to the motherboard, with the advantage that the integrated and discrete GPUs will work together in one notebook to balance performance with battery life. Notebooks with discrete-only GPU designs will run graphics solely from dedicated video RAM, leaving main memory alone and boosting total system performance substantially.
AMD made it so easy for OEMs to choose from among these options that notebook buyers will see models with performance/weight/battery life balances that weren't possible before. For the past two years, a desire for that range of choice in one vendor's notebook product line has sent buyers to Apple. If a vendor standardises on AMD's Next Generation Notebook Platform, AMD's technology raises performance levels at all price points beyond what Intel can deliver. The time I've spent with Puma's engineers leaves me comfortable with that assertion.
How far AMD carries notebook technology past Intel's unimpressive status quo is a question that needs answering. As soon as I land a notebook for review, I'll quantify the platform's advantages for you, and dig into that tough metric that I consider so vital: battery life with a discrete GPU. I'm desperate to be impressed by a notebook that doesn't bear an Apple logo. I have a strong feeling that when that glorious day comes, AMD will have made it happen.

There's no turning back from a Mac

I've been relating the story of a professional colleague who, some months ago and under semi-voluntary circumstances, made the switch from Windows to the Mac. Her twisted arm now nicely healed, she has not only switched, she has an unshakable conviction that even the fastest, newest PC would be an embarrassing hand-me-down next to a mature Mac. If I were to swap her early model MacBook for a quad-core PC desktop, she'd accept it with the graciousness one brings to the gift of a fruitcake (or one from a fruitcake), and then covertly scan eBay for a PowerPC Mac. It is not the particular machine or its performance to which she has become attached; indeed, the hardware is, to her, invisible. The Mac platform is home to her now, not out of religious devotion or some wish not to disappoint me, but because it clicks with both halves of her brain in a way that Windows cannot.
I've held forth with her on this subject, namely how creativity and logic get equal attention from Mac developers because Apple's development tools, code samples, documentation and style guide naturally produce applications that are right brain/left brain balanced. Mac developers' first published efforts often bear an apologetic "this is my first time... don't hate me" in their accompanying README file, and yet they exhibit a degree of usability and consistency that Windows and X Window System developers can't afford to invest in. When you're coding for a Mac, form and function progress hand in hand without special effort.
When I treat my colleague to theses such as this that are outside her realm of interest, her advice, borrowed from a film, is "write it, dear boy". One cannot be a friend to me and a stranger to patience.
The vessel that carried her from Windows to the Mac platform was an early Core 2 Duo MacBook, a fit little notebook that I chose for two reasons. I figured that she'd want a Mac that she could take with her when she travels. I was also mindful of keeping Apple's investment in this project to a minimum. Although it nets me the best observational research for which a writer could hope, and it is further enabling my efforts to adapt technology to the changing needs forced on users by the deterioration of their vision, it benefits Apple nothing.
We worked together to fashion MacBook into a functional desktop. It took an old Lexan-encased 20-inch Apple Cinema Display, a trio of Lego pedestals with double-sided tape to raise it to the proper height, and a small, battery-operated fluorescent lamp fixed beneath the display to gently illuminate the keyboard. This weird-looking arrangement works surprisingly well, but the MacBook can only wedge in with its lid closed, and it has to be turned to one side to make room to insert or eject a CD or DVD. This is what you or I might consider extraordinary effort to derive a barely acceptable result, but she's so much in the Mac that she's never expressed anything but delight in her use of what we've put together, be it ever so kludgy.
Even with all of this, she has to lean in to see her work at her desk because she cannot rotate or tilt her display. She cannot hope to use a notebook computer for longer than the briefest periods because to raise its display to a workable height would necessitate the use of a separate keyboard and pointing device (defeating the purpose). While its user hadn't the merest desire to replace or upgrade it, I resolved to cut the MacBook loose so that it would be free to travel as it is designed to do.
I expressed to Apple my desire to carry my research to a new level by bringing in an iMac all-in-one desktop for my subject's use. To my surprise, Apple agreed, at least for a time. The 24-inch iMac has arrived, and my colleague, who shares my lack of susceptibility to anything new for its own sake, is not keen to have it on her desk. In her experience, moving from one computer to a newer one leeches productivity while leaving her no better off than before. My long experience with PCs and Unix servers and workstations leaves me in total agreement. My experience with replacing a Mac with a new one is something I'm not sharing with her.
I've told her that the MacBook is going back next Monday. Its shipping box sits next to my subject's desk as a reminder. I gave her an external hard drive and told her to make ready for the move by copying everything that matters to the external drive, burning the stuff she really can't afford to lose to DVD, and gathering all of the installation media and registration keys for her software. It's standard operating procedure for a PC swap, a routine that all sensible people put off for as long as possible.
Imagine how pleased she'll be when I tell her that Apple insists on having the MacBook back this Thursday rather than next Monday, and by the way, I'm leaving town and I won't be able to help her set up her new machine. Apple is making no such demand, but there is much to be learned from observing subjects' reactions to unexpected challenges.

AMD's roadmap burns through Intel's fog

Intel CEO Paul Otellini's memorable "shame on us... mea culpa, we screwed up" March 2007 speech to Morgan Stanley investors came after his company's marketing fog machine could no longer conceal the truth that, depending on your point of view, Intel was peddling technology that it knew to be somewhere between four and eight years behind AMD's. AMD told you so, and so did I, but Intel's marketing is capable of overpowering reason. Intel manages to thrive by setting expectations that match its technology, and raising those expectations every two years by just enough to make you see your Intel-based PC or server as wanting. Otellini got stuck apologising because AMD got a chance to show buyers Opteron's potential. The market's expectations followed, as they naturally will when people buy technology that never needs replacing. Given the choice between buying well and buying often, the market chose the former.
Intel's smokescreen is back in overdrive. Those who do a light amount of homework before buying are getting Intel's same old message: Higher clock speeds, bigger cache, manufacturing process shrink and faster front-side bus make the world go 'round. That latest speed bump makes your one-year-old computer look pretty sad (on paper). And when Intel goes "tock", it's rip and replace time to get those extra cores and the broader bus. Intel put the cherry on top by getting everyone worked up over CPU power draw to the exclusion of total system power draw. Intel sets the market's agenda. It tells buyers what matters.
AMD designs technology that will enable the workloads that you'll be running in two or three years. It strikes many as improbable when I tell them that AMD-based hardware, servers in particular, get faster over time as operating systems and application developers start unlocking the potential of the platform. When I say this, I may not take enough care to point out that AMD is committed to raising that potential between major revisions of its CPUs and whole system platforms. Intel can't catch up because AMD presents a moving target with meaningful point enhancements between major architecture revisions. AMD ticks and tocks as well, but it's the market that swings AMD's pendulum.
AMD is getting bolder about letting the market, in this case, IT, know that even through the gathering fog, AMD has a clear picture of what matters most to system buyers. You don't hear much about it these days, but price/performance matters. AMD's record-setting results with Quad-Core Opteron on SPECweb2005 set a realistic bar for server performance, a record that, notably, Intel misses by a hair. But Quad-Core Opteron comes in 41% lower in cost than quad-core Intel Xeon in two-socket servers. AMD servers cost less to build. Whether these savings will be passed on to you as a lower total system price is up to the OEM and its tendency to maintain artificial price parity between its similar Intel and AMD offerings. Not that I'm suggesting there's any pressure to do that.
AMD's not handing performance per watt to Intel. AMD's published benchmarks show quad-core AMD systems skunking Intel Core 2 Xeon on floating point synthetic benchmarks, by margins of 13%-50%, but quad-core Opteron lags Core 2 Xeon's integer performance by an impressive margin. Intel's butt-kicking compiler scored quad-core Xeon an earnest 20% lead over Quad-Core Opteron on SPECint_rate2006 (peak). The best AMD-targeted compiler from Portland Group couldn't close the gap. But interestingly, when the playing field was levelled a bit by using the gcc open source compilers, AMD pulled to within 9% of Core 2 Xeon on SPECint_rate2006 (base). The likelihood that you'll encounter architecture-optimised applications in the wild is mighty slim, but AMD gets candour points for showing this shortcoming of its own making. If AMD cares about closing the integer benchmark gap, AMD needs to contribute benchmark-winning optimisations to GNU.
AMD counters the gearhead-level speeds and feeds derived from synthetic benchmarks with IT-relevant load metrics. In web transactions, virtualisation and parallel workloads, Quad-Core Opteron outperforms quad-core Xeon by margins of 9%-16%. But there's a point worth noting: AMD scored these wins with a 2.3GHz CPU and 2MB of Level 3 cache. Intel lost out to AMD with quad-core Xeon CPUs running at 2.83GHz with 12MB of cache. The configuration differences between the two architectures give AMD what you call headroom. AMD is holding manufacturing process shrink, CPU clock speed, bigger cache and additional cores as cards to play on IT's behalf when the time is right.
The time is right. Later this year, AMD will roll out "Shanghai", a Quad-Core Opteron built on a new 45 nanometer process, matching Intel's in scale while using a simpler method. Shanghai raises the ceiling on CPU clock speed to a level that AMD didn't disclose, and lowers power at idle by 20%. That's a ridiculous metric for a two-socket server, but in an eight-socket server, the likelihood that a socket will be idle is higher. AMD surprised me by borrowing a page from Intel's playbook, doubling its Level 3 CPU cache to 6MB. That will make a serious difference in the performance of applications optimised for Intel CPUs.
I was particularly struck by AMD's claim that Shanghai would deliver 25% faster times for world switch (switching from one guest OS instance to another) than the present Quad-Core Opteron. This, combined with a 10% boost in memory bandwidth, will give AMD a leg up in virtualisation.
Shanghai marks the server debut of the coherent HyperTransport 3 (cHT3) bus. cHT3 is faster and more scalable than the HyperTransport 1 bus implemented in present Quad-Core Opteron servers, which probably contributes to reduced world switch time and increased memory bandwidth, both measures that are sensitive to the speed of the interconnects among CPUs.
The Shanghai CPU, which AMD projects will be available this year, will be a drop-in replacement for Quad-Core Opteron. Given where the economy is likely to be when Shanghai shows up, chip swap-upgradeable servers are a really smart investment.
I've saved the best part for last. If AMD hewed to Intel's "tick tock" strategy, which dictates a substantial architecture revision (tock) every other year, then with Shanghai, 2008 will certainly go down as an AMD "tock". In 2009, an architecture revision code-named "Istanbul" will carry AMD's 45 nanometer Opteron to six cores. In 2010, AMD will knock Intel's tocks off: A 12-core large scale enterprise CPU named "Magny-Cours" is slated for the first half of that year, and will deliver on AMD's big iron availability and reliability strategy. A six-core edition of this CPU, "Sao Paulo", will roll out at the same time.
AMD is always cautious about projecting too far ahead, fearing that system buyers might suspect Intel-like obsolescence by design. AMD is smart to come out swinging by laying out present and future technology despite the risk, but amid the excitement over architectures to come, the value of AMD's long-term commitment to buyers can't be set aside. The Quad-Core Opteron server you buy today will upgrade to six-core, and even when 2010 comes around, 2008's Quad-Core Opteron servers will remain state of the art relative to Intel. New systems based on that platform will still be sold, and parts will remain plentiful. Isn't it nice to see clearly again?

It's quiet, it's the rack of my dreams

When I refer to my lab, I use the term loosely. It's a 10-by-10-foot working space whose smooth walls channel the sound from every device with a fan straight into my ears. I share that room with every server I use and test. Of these, an 8-core Xserve is the only box that stays on 24/7, and I wish I could say I've gotten used to the noise. I haven't. While the Xserve idles at a pleasant noise level, as soon as any computing load kicks in, the fans spin up. When they do, they find a frequency resonant with the part of my brain that tells me that if I value what's left of my hearing, it's time to leave the room. The necessity of working with rack servers that get louder with each generation has made noise the primary governor of my workflow.
Of rack servers, Xserve is relatively quiet. Apple's design favours ergonomics, but this Xserve is configured with 8GB of RAM. For contrast, consider the four-socket, 16-core, 32GB 1U Barcelona rack server that AMD recently shipped to me. At idle, that machine is as loud as Xserve is at full tilt. My 16-core Xeon rack server is no better. I honestly can't live with them. I was ready to stick them in my garage, sucking wind from a portable air conditioner. The combined racket would be intolerable.
There are two things that I set out to save: my ears and my power draw. To rescue my hearing, I shopped endlessly for noise reduction solutions, from sound-absorbing pads that stick to the wall to refrigerated racks that are, more or less, refrigerators. Sound-absorbing this and noise-scattering that, when they're pitched as solutions meant to work outside the rack enclosure, are glorified packing foam. The cost of cooled enterprise racks is so outrageously high that an employer would have to weigh the expense against the value of one's hearing. But even those enclosures that seal for self-cooling are built for non-cooled and outdoor environments, neither of which is my problem.
That long search brought me around to a company I've known about for a long time, but didn't associate with solutions suitable for enterprise use. After a long and edifying discussion, GizMac, a company that really needs to work on its name, agreed to send me an XRackPro2 sealed rack enclosure. GizMac was careful to set my expectations. XRackPro2 is not, the company warned, a noise-isolating cabinet. It reduces noise, I've learned, with varying effectiveness depending on the type and amount of fan noise generated inside the rack. But I'll tell you this: I packed an 8-core Xserve and two 16-core machines in a 6U XRackPro2. When I powered them all up, the noise was so overwhelming as to make a telephone call impossible from anywhere in the room. Until, that is, I shut XRackPro2's foam-sealed front and back doors. I sat there opening and closing the doors for quite a while, marvelling at the difference in noise levels. I also discovered that the forced airflow through XRackPro2, with a filtered intake underneath the enclosure, where the cool air is, and a pair of huge AC fans mounted to the rear door, made the server fans spin considerably slower, further helping to control noise. GizMac chose the fans for the rear of XRackPro2's cabinet well. They are barely noticeable.
XRackPro2 makes a jaw-dropping difference in rack servers' noise level, but by itself it isn't enough. The 6U XRackPro2 renders an 8-core Xserve silent. Even in the XRackPro2, the noise from three servers churning under high workloads falls from painful to safe, but in a 10-by-10 space, noise-reducing headphones are still occasional companions.
There is, however, an unwelcome contributor to server noise — specifically, the noise that servers generate when they're shut down. A shut-down server's power supply fans keep spinning to supply power to subsystems that are always operating, like LAN interfaces and system management controllers. Why the fans have to spin so fast to supply so little power is a mystery to me. In any case, I consider the maximum acceptable power draw for an unused server to be zero watts, and the maximum noise level to be complete silence. This is achieved by yanking the AC plug from the wall, something that few servers can do for themselves. But there is a way to do it.
Ages ago, DataProbe sent me an iBootBar rack power monitor/controller for evaluation. It sat idle for want of a rack to call home, but as soon as the XRackPro2 arrived, the iBootBar was the first device in the enclosure. The configuration of iBootBar that I use controls the power to eight outlets, singly or in user-defined groups. You give each outlet or group a name, and then you can control the outlets or list their power status using any of iBootBar's included Telnet, web, serial and modem management interfaces, all of which are constantly and simultaneously active. iBootBar can cycle power to force a server to do a full reset and perform user-defined power sequencing. It can also monitor network devices with auto-ping and cycle their power if they fail to respond, acting as an external watchdog.
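The auto-ping watchdog is the feature I lean on most, and the idea is simple enough to sketch. Here's a minimal illustration of the concept in Python; the watched host, the outlet name and the cycle_outlet() hook are hypothetical placeholders, and this is in no way DataProbe's firmware or management interface.
    import subprocess
    import time

    # Outlet name -> IP address to watch (example address from the documentation range).
    WATCHED = {"lab-router": "192.0.2.1"}

    def host_responds(ip):
        # One ping with a short timeout; ping's flags vary slightly by platform.
        return subprocess.call(["ping", "-c", "1", "-W", "2", ip],
                               stdout=subprocess.DEVNULL) == 0

    def cycle_outlet(name):
        # Placeholder for whatever actually cycles power on the named outlet.
        print("power-cycling outlet", name)

    while True:
        for outlet, ip in WATCHED.items():
            if not host_responds(ip):
                cycle_outlet(outlet)
        time.sleep(60)   # check once a minute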
The more I work with iBootBar, the more applications I find for it. For one, iBootBar measures the power draw, in Amperes, for each group of four outlets. This allows for relatively precise remote power monitoring, which includes notification thresholds if the power falls below or exceeds a set level. iBootBar handles physical security by allowing me to cut power to the KVM switch, which is locked inside the XRackPro2 cabinet. I can fail-over the Xserve to one of the 16-core servers with neither machine's involvement, and if I'm under attack, as I was recently, I can kill the internet router and work safely via the LAN.
I haven't hit perfection yet. There are a couple of things that iBootBar doesn't do that would raise its usefulness to a higher level. One would be to make iBootBar able to issue LAN wakeup packets ("magic" packets) to devices that are not set to turn on when AC power is restored. Another would be basic scripting. But these are small issues, considering how much noise and wasted power I was dealing with before the combination of XRackPro2 and iBootBar entered the picture. Now my ears can be where my servers are, and when we're apart, iBootBar gives me remote monitoring and control of the entire rack.
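For the curious, the "magic" packet I wish iBootBar could send is nothing exotic: six 0xFF bytes followed by the target's MAC address repeated 16 times, broadcast over UDP. Here's a minimal sketch in Python; the MAC address shown is a placeholder, and the target machine's NIC and firmware have to have wake-on-LAN enabled.
    import socket

    def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
        # Build the payload: 6 x 0xFF, then the MAC address repeated 16 times.
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        payload = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, (broadcast, port))

    send_magic_packet("00:11:22:33:44:55")   # placeholder MAC address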

A timely proposal for IT energy conservation

I take it as a point of pride that InfoWorld is uniquely outspoken on matters of green computing, and that my dedication to that cause predates InfoWorld's. When I write about it, I stay away from suggesting that there is any sort of "save the planet" global imperative that should drive IT towards consolidation and the purchase of more energy-efficient equipment. By pursuing energy-conscious policies, what a company conserves is capital. As energy prices inevitably rise, kilowatts, BTUs, and square feet rise above everyday concerns. I reach out to the pragmatists with the message that conservation makes fiscal sense. I don't mind sneaking my agenda through the side door.
Journalists must always be aware of the cost of taking sides on a controversial issue, ever conscious of the fact that doing so alienates a portion of one's audience. I can't write for those who see conservation as a purely fiscal issue, because that encourages offsetting the impact of a lazy approach to energy conservation with reductions in head count, withdrawal of community projects, raising the prices of products and services, or targeting a narrower, more elite market that can afford to underwrite energy waste as a cost of doing business.
Frankly, the trouble with dressing my energy agenda in a suit of pragmatism is that it isn't getting the message across. While I'm preaching consolidation as a means of reducing energy costs, too much of IT, and too many of the server vendors who supply it, use that call to justify overconfiguration. A four-socket, 16-core rack server with a pair of 1-kilowatt power supplies can do the job of two eight-core servers with a pair of 700-watt power supplies. The trouble is, those eight-core servers are only a year old. Buying a bigger, hotter server isn't about consolidation; those eight-core servers aren't taken off-line. The new server is added to the pool of virtual machines that are ready for duty at a millisecond's notice. We may ride into the purchase of higher-density servers under the flag of consolidation, but we typically skip the second half of that exercise that involves turning off the machines we intended to replace.
It's not helping.
It's tempting to push conservation off on the big, heartless companies that are least likely to do it, but like all social change, this one needs a grassroots kick, and I have an idea. I call it the "green delay". Customers that hit your website, along with internal users that poll for email every 10 to 30 seconds, demand near-instantaneous responses to their requests. When a website takes more than seven seconds to be ready for interaction, it's said that a great many users will go elsewhere rather than wait the few additional seconds it might take to put up clickable buttons. Sure, time is money, but when did we start calculating the value of our time in increments of seconds or even milliseconds?
I think it's time to consider the flip side of the time/money equation: Time is carbon. Our impatience with technology isn't about money, and on the scale of seconds or milliseconds, is not justifiable as business necessity or competitive advantage. Whoever I am, whoever you are, we are the people who need our data right away. We need file shares to mount instantly. Our SQL query tables must populate instantly so that we can start scrolling through 500 results a few seconds sooner. Let someone else who doesn't pay as much for a particular service or technology do the waiting. The cell network is plugged up with all those consumer voice calls, and my IM and BlackBerry messages, and my mobile browser sessions, aren't going through fast enough to suit me.
I realise that for a guilt trip to work, it needs to be more concrete and specific. I have a specific proposal, a painless one that actually addresses two problems at once. Outside first-shift business hours, email servers don't need to offer a quick response. Email protocols were designed to accommodate multi-hop modem links between sites and batch transfers scheduled several hours apart. In other words, the worldwide network of email servers is already optimised for green operation. Let's take advantage of that. Instead of treating email like instant messages when recipients can't possibly read them, let there be latency.
Figure out how few physical servers need to be online outside business hours to ensure that messages from other time zones are received by the start of the next business day. It's OK for email connections to time out because fewer servers are on a hair trigger to answer SMTP connection requests. Email transfer protocols require that servers keep attempting delivery, not for seconds or hours, but for days. The beauty is, your users won't notice. You can crank email capacity up again when your people are back on the job.
If you make email wait, you gain another benefit. Spammers have zero patience. When a chunk of spam doesn't get to you on the first try, it moves on to a more eager target. Email-enabled attacks, like dictionary scans of user names, require fast response to each attempt. Wouldn't it be great if, by IT adopting a more ecologically sound approach, more spammers slammed into our drawbridges?
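As it happens, the behaviour I'm describing is essentially what greylisting already exploits: temporarily refuse the first delivery attempt from an unknown sender and accept the retry that any legitimate mail server is obliged to make. A toy sketch of that decision in Python, with the in-memory store and the five-minute window chosen purely for illustration:
    import time

    seen = {}               # (client IP, sender, recipient) -> time of first attempt
    GREYLIST_DELAY = 300    # seconds a sender must wait before a retry is accepted

    def smtp_verdict(client_ip, sender, recipient):
        triplet = (client_ip, sender, recipient)
        now = time.time()
        first = seen.setdefault(triplet, now)
        if now - first < GREYLIST_DELAY:
            return "451 greylisted, try again later"   # temporary failure; real MTAs retry
        return "250 accepted"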
There I go again, presenting a pragmatic angle on social responsibility. I can't make this one too easy. The important part of any IT conservation effort is to set the goal of powering servers down as often as, and for as long as, possible. It's nice that a server with a 1,000-watt power supply can idle at 300 watts. A server should idle at zero watts, and it can. Modern servers will power down and back up on preset schedules. Intelligent power controllers, like iBootBar from DataProbe, can control system power via script, telnet, modem, and browser interfaces so that servers that need help can power up other servers on the LAN.
The point is, every second spent waiting for a message or a website can make a difference. Time really is valuable. We can't take for granted that we have an unlimited supply of it regardless of our actions.

.Mac and Live Mesh show promise

Apple's .Mac comes close to offering professionals secure shared data and remote desktop access without the hassle of VPN. Microsoft Live Mesh hopes to take it all the way.
Old-schoolers will tell you that there are only two places your important data should live: on your meticulously secured network behind a paranoid firewall, or with data protection and storage firm Iron Mountain. Having data living exclusively within your domain presents thorny operational problems when two or more people need to get at it. If you want to selectively share files with temporary staff, business partners, external software testers, or employees who are on the road, you've got to find a way to publish it with a combination of easy access and tight security.
If you've shared business data that can't easily be placed in a shared Exchange folder by putting it in a password-protected zip file and stuffing it in your Yahoo! Briefcase or its like, you'd hardly be the first. Nor would you be the first to stay on the phone with that remote user until they verified receipt of the file so that you could delete it immediately. You're wise to assume that data hosted on free, public, consumer online services will prove inaccessible, will transfer to its broadband-endowed recipients at modem speed, or will fall into the wrong hands.
It makes IT departments break out in hives, but professional users need remote access to their desktops. Whether it's to run applications that are locked to that machine by licence, or to make a quick Saturday check on a time-consuming task, or to pull out files that are wisely (or unintentionally) not publicly shared, there are some things that can only be accomplished at the desks at which professionals spend so little of their time. It is a truly dicey matter when an employee works at home. When they're travelling, or, ironically, in the office for meetings or such, they routinely turn their desktops into servers that stand naked on residential DSL and cable modem networks.
If you think you can impose security requirements on these users, you're dreaming. Users will always take the path of most convenience, and where users' remote access is concerned, IT can't possibly craft a more convenient solution than the forwarding of file sharing and VNC ports through their home or branch office routers.
VPN is the prevailing standard for safety, but that's effective only for services that live behind your firewall. It's wholly impractical, and sometimes difficult and unwise, for off-site users, contractors and branch offices to VPN into your corporate LAN to share data. And if you have charted a course by which workers at hotels can use your corporate VPN to connect to desktops in their home offices, you've got too much time on your hands.
Apple's .Mac service has the makings of an interesting solution to the desktops-as-servers conundrum. It sets up a virtual volume, called an iDisk, that appears as a desktop icon on Windows and Mac clients. The iDisk client that's launched when you click on the desktop icon is a convenience. iDisk uses WebDAV, a secure and mature, if sluggish, standard for access to remote file hierarchies. It's a capital notion, because any changes to files are immediately visible to all users subscribed to a given iDisk, and the iDisk client lets users use Windows' Explorer or OS X's Finder to move files around, as though the iDisk were a local disk. iDisk also automatically synchronises remote files to a local folder, so that when you open your iDisk while you're offline, you can still access your files. When you're back on the Net, changes you've made are shipped to your remote iDisk and visible to other authorised users.
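WebDAV is just HTTP with a few extra verbs, which is why nearly any platform can mount an iDisk. As a rough illustration of what the client is doing under the hood, here is what listing a folder and uploading a file look like in Python; the URL and credentials are placeholders, not Apple's actual endpoints.
    import requests

    base = "https://idisk.example.com/member/Documents/"   # hypothetical WebDAV URL
    auth = ("member", "password")                           # placeholder credentials

    # PROPFIND with Depth: 1 lists the immediate contents of a collection.
    listing = requests.request("PROPFIND", base, auth=auth, headers={"Depth": "1"})
    print(listing.status_code)    # 207 Multi-Status on success, with an XML body

    # PUT uploads (or replaces) a file in the collection.
    with open("report.pdf", "rb") as f:
        requests.put(base + "report.pdf", auth=auth, data=f)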
iDisk is clever and simple, but it shows both its age and its consumer-targeted nature. As I said, it's slow, owing to SSL encryption and HTTP's unsuitability to chatty protocols. Although changes to an iDisk are visible to all online users, there is no notification scheme to alert users that a shared volume's contents have changed and nothing like file versioning to prevent changes submitted by multiple users from overwriting each other. The 10GB storage pool that comes with .Mac, which is expandable for a fee, is roomy enough, but Apple subjects all users to limits that have been imposed to guard against the whims of adolescents. There is a monthly transfer limit of 100GB, but if you use 50GB of that in the first two weeks of a month, Apple shuts down your account. My suggestion to Apple is that transfers among .Mac users should be unlimited. It would help distinguish .Mac's service from Gmail and flaky free personal file hosting services, and it would make it worthwhile for companies to buy .Mac accounts for their users.
Although iDisk needs some renovation, Apple has added a thoroughly modern touch to .Mac's suite of services. Back to My Mac uses .Mac as a remote desktop access gateway for Mac clients, eliminating that other justification for turning home office desktops into vulnerable servers. It uses .Mac to transparently tunnel through firewalls, even those odious hotel and conference center gateways, and to pierce the veil of dynamically assigned IPs, to put your desktop's display, keyboard, and mouse at your command. There are lots of specialised services that do the same thing, but Back to My Mac is blissfully simple, not least because it is a standard feature of OS X Leopard. For any Mac user, Back to My Mac is just there, and to me at least, it is pretty plainly aimed at professional users.
Without changes to iDisk, .Mac falls short of requirements for commercial use, and Back to My Mac is of no use if you really need Back to My Vista, or that decrepit XP thing. Microsoft is floating a closed trial of Live Mesh, which, on paper at least, looks like .Mac for the 21st century. When it goes live — timing and cost are not mentioned — Live Mesh could render specialised file transfer, folder sync and remote desktop access services obsolete. I like seeing specialised anything go obsolete. I say that Live Mesh could obsolete these things. A lot depends on how Microsoft packages it.

Measuring server power-use is a minefield

Since it was Earth Day recently, let us examine the criteria that IT brings to its purchases and arrange to make power efficiency a top priority. All it takes is the will, a little homework, and the embrace of the delusion that there's any fit and fair way to compare the power consumption of two similar pieces of equipment.
Last December 27, SPEC (the Standard Performance Evaluation Corporation) announced the availability of its SPECpower benchmark. One would think that having the heaviest of the benchmarking heavyweights pour concrete on that most slippery of metrics would give us something to go by. InfoWorld has been waiting for its copy of SPECpower, which SPEC acknowledges is a first step, since January, and I have a guess as to the reason we're still waiting. Perhaps some members of SPEC, which is primarily a consortium of vendors, have encountered the same issue that I have: Power measurement is only 5% process. It is 110% policy. The total exceeding 100 is appropriate, because neither SPEC nor anyone else can ever call the book on fairness policy for power benchmarking closed.
It shows wisdom on SPEC's part that it refers to SPECpower as a first step. But I think that SPEC started development of SPECpower with the wrong objective in mind, that being to derive a result (a figure of merit in SPEC's words) that tries to pour cement over that elusive marketing metric I hold in lowest esteem: performance per watt.
I am pleased that SPEC has made an effort to quantify "a performance". It is essential to have a meaningful constant for accomplishment so that a formula containing watts, which is a fair and concrete measure of effort invested, approaches an expression of efficiency. SPEC's formula counts the number of cycles through a Java server workload over a period of time, while the power draw during the same period is charted. The figure of merit, ssj_ops/watt, really isn't bad as distilled metrics go. SPEC doesn't deal in squishy numbers; there's no "13 SPECfoo_marks". So ssj_ops/watt is pretty good, until you try to use it to compare two servers.
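To make the figure of merit concrete, here's a back-of-the-envelope sketch of how a result of that shape is assembled: sum the throughput measured at each target load level, sum the average power drawn over the same intervals (active idle included), and divide. The numbers are invented for illustration and are not a published SPECpower result.
    # Each tuple is (ssj_ops measured at a load level, average watts over that interval).
    # Invented numbers; a real run steps through many more load levels.
    intervals = [
        (310000, 258),   # 100% of calibrated throughput
        (248000, 231),   # 80%
        (155000, 198),   # 50%
        (62000, 164),    # 20%
        (0, 140),        # active idle still draws power
    ]

    total_ops = sum(ops for ops, _ in intervals)
    total_watts = sum(watts for _, watts in intervals)
    print("overall ssj_ops/watt: %.0f" % (total_ops / total_watts))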
Aside from fairness, there are some technical shortcomings in SPECpower, the granddaddy of which is reproducibility. If InfoWorld's Test Centre attempted to validate the SPECpower rating published by a vendor, which is the sort of thing the Test Centre likes to do, there is no chance that we'd derive a matching result. If our findings put the vendor in a better light than its own, they'd be overjoyed. If, however, we showed after much diligence that the vendor's published SPECpower results appear to be overstated, every vendor would seek to tear apart our testing methodology and policies, and the list I've compiled of foreseeable vendor objections is daunting. Our Test Centre will do a better job of levelling the playing field across vendors than vendors can (and want to) do themselves, because we can replace environmental variables that will vary across vendor testing facilities with absolutes.
Those variables are doozies. SPEC requires disclosure of everything from system configuration to compiler flags in published results, but the impact of variations in compiler flags and memory clock speed baffles buyers. Fortunately, we can make sense of these and translate them into buying advice. But with power, some variables that will appear to be satisfied through disclosure will actually be mercurial.
The example I'll offer is temperature. SPEC requires the use of a temperature probe. You can establish a policy of pegging all tests at an ambient temperature of, say, 24 degrees Celsius, but your 24 isn't the same as mine. Try it yourself. Take a simple infrared thermometer and walk around your datacentre. Grab a baseline by pointing the thermometer at a piece of cardboard held at arm's length to get a basic ambient temperature. Then aim the thermometer at various surfaces, of varying materials and at varying heights, and varying distances from airflow. Compare the temperature of a server's top plate inside a closed rack to that of a similar server inside an open rack. Does a server that shares a rack with a storage array measure the same temperature as the server alone?
Temperature affects every server in a different way. The design of a server's cooling system and its programmed responses to high temperatures say much, if not everything, about a server's quality of design. For example, the cooling system in a cheaply made server will have one purpose, that being to suck air from the front of the chassis, and possibly the sides, and blast it out the back. The fans themselves, being electrified copper coils, generate heat, and keeping them spinning at several thousand revolutions per minute in open, probably particle-laden air makes them subject to failure. Most servers don't care how much heat they make or where it goes, be it into the air or indirectly into the intake of the server above it, influencing its efficiency. But you have to worry about heat because you pay to move that heat outside the building, accounting for a large percentage of your operating cost. Perhaps server efficiency has to take into account its contribution to the duty cycle of the compressors in your air conditioning. Try to measure that.
My point is, don't expect easy answers to power efficiency. InfoWorld's Test Centre is taking on power testing and considering SPECpower as part of that plan. In the meantime, I can tell you the secret to avoiding the homework on this one, and it is my constant advice: Count the number of active server power supplies in your shop and commit to reducing that number over time. That'll do nicely for now.

Back to the Mac, to my true passion

Several months ago, I determined that my years-long fondness for Macs required re-examination. I quietly took a break from the Mac to get some perspective, to check out Vista, AMD and Longhorn (Windows Server 2008) untainted by Apple's PR and uninfluenced by other journalists and bloggers. I elected to take a break from reviews of new Mac hardware, the occasion of which always piques my interest in Apple's platform. There were times when I felt I'd chosen the worst possible time for this hiatus. I ended up passing on MacBook Air, Time Capsule, Harpertown Mac Pro, and most painful of all, the new MacBook Pro. It was difficult seeing InfoWorld pick up reviews of these from sister publications, but I take my responsibility to readers very seriously. I can't very well counsel you on technology choices if I consider the field limited to one worthwhile player, especially when that player projects the image that it competes only with the generation of systems that preceded what's presently sold.
I found enormous value in my time away from Mac. I made the kind of discoveries I used to make routinely before I took on the Mac as a specialty, and as I take up the Mac again — which I am doing immediately — it's clear that my appreciation for the platform is justified, and that the customary split of my effort and attention between Apple and AMD is justified.
The genuine, practical superiority of AMD's Barcelona server platform, and of the Phenom desktop platforms derived from it, came to light during the break I took from Mac. A one-socket, quad-core Spider (Phenom plus ATI CrossFire graphics) runs Vista so obscenely fast that even a die-hard Mac user's head will turn. Privately, of course.
I found it extremely intriguing that systems built on Phenom platforms can tune themselves autonomously for the maximum possible CPU and GPU speed over a surprisingly broad range, based on a whole system approach that takes cooling, power supply capacity, and your preferences for noise and maximum power consumption into account. I found that I could speed bump an AMD Phenom desktop for free by moving it closer to the floor, where the cooler air prevails. It's a grand idea, and one that in itself shows genuine customer-focused insight.
I gained a fresh appreciation for the GNU compiler collection, which has taken remarkable strides since I last took a deep dive in it. I was unaware of the level of engagement from commercial partners, including Apple, AMD and Novell. Each is undoubtedly pursuing its own agenda, but it does so within the framework and culture of one of the most tightly controlled and liberally licensed open source projects in existence. AMD has finally embarked on the long road to compiler parity with Intel with its contribution of Family 10 (Barcelona/Phenom) architecture-specific optimisations to GNU.
Apple has been busy on the gcc front as well. Objective-C 2.0, with its desperately needed garbage collection, has been a reality in the GNU toolchain since Xcode 3 was in non-disclosure beta. In release 4.2 of gcc, auto-parallelisation joins auto-vectorisation to adapt projects to multiprocessing and vector acceleration without developer intervention. Unless I'm mistaken, the public beta versions of the iPhone SDK, now at Beta 3, mark Apple's first swing at Microsoft-style free public distribution of pre-release dev tools. Until now, the privilege of early access has been reserved for paid members of Apple's Developer Connection programs. The iPhone SDK carries all of the latest GUI tools, documentation, and GNU command-line compilers, including Fortran, into Apple's default distribution. Go to Apple's iPhone Dev Centre website and scroll to the bottom of the page for the download link. You do not need to pay the $99 fee to register as an iPhone developer to use the new tools, which compile applications for Leopard as well as iPhone.
Apple is getting ever more daring in its engagement with open source in other ways. WebKit, the fast HTML/CSS/SVG rendering and JavaScript engine used in Safari, has caught on like wildfire outside Apple, and why not? To get a commercial browser, loaded with current and emerging standards, free and open for incorporation in your software, is the stuff of fantasy, and Apple holds virtually nothing back. The WebKit project is not strictly Apple's. It enjoys broad community engagement, but it is worked as a priority by Apple's staff, even to the benefit of direct competitors. For example, the browser on Nokia's E-series phones is WebKit-based, and this is not the only example where Apple effectively put its staff and technology to work for the benefit of a competitor. The GNU toolchain's adaptability to multiple embedded platforms will see WebKit in everything from phones to toys, starting with iPhone and iPod Touch. Now that WebKit has been accepted into Google's Summer of Code, I can't wait to see what innovation comes from that gathering. I plan to ply the most influential attendees with the libations of their choice and get their take on where development is headed.
Apple pushed the source code for the publicly exposed innards of OS X Leopard, known as Darwin 9, out for public download on MacOS Forge. Every time it does that, I imagine the move preceded by arguments inside the office about the effort and risks that such a program visits on Apple's platform business. The work of preparing a project of Darwin's size for public distribution is inestimable, and Apple deserves credit for putting it on the agenda of its top OS engineers and project leaders.
I love the conservative approach that Apple is taking with iPhone, especially with regard to multiprocessing. iPhone applications need to launch and quit instantly, yet after the first execution they relaunch with their closing state cached and persisted in detail. It's a freeze/thaw model of state persistence that I'd like to see extended to applications in general. Apple's Xcode has Instruments (formerly xRay), a tool that jams electrodes into your program's and the system's running environment. It records and charts statistical data at runtime along several axes for later examination. It's the most effective means of hand-tuning code for efficiency that I've ever used, and it shows the benefits of persistence quite plainly.
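The freeze/thaw model is easy to picture outside the iPhone: persist a compact description of your state on the way out, restore it on the way in, and the application appears never to have stopped. A toy sketch of the pattern in Python follows; the state file and its contents are invented for illustration, and this is not Apple's API.
    import json
    import os

    STATE_FILE = os.path.expanduser("~/.myapp_state.json")   # hypothetical location

    def thaw():
        # Restore the last saved state, or start fresh on first launch.
        try:
            with open(STATE_FILE) as f:
                return json.load(f)
        except FileNotFoundError:
            return {"open_document": None, "scroll_position": 0}

    def freeze(state):
        # Persist the closing state so the next launch picks up where we left off.
        with open(STATE_FILE, "w") as f:
            json.dump(state, f)

    state = thaw()
    state["scroll_position"] = 42    # pretend the user scrolled
    freeze(state)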
Taking a break from Mac hardware gave me a chance to drink more deeply of the software that Apple maintains off its beaten path. MacPorts and Apple's validated versions of open source projects are open source treasure troves stuffed with some 5,000 free applications tuned and packaged for Intel and PowerPC Macs. Digging through these repositories is so addictive that I had to issue myself an edict to get back to work, which I shall do, newly confident in my mission and purpose. I'm a Macophile for good reason.

Adobe's AIR gets rich applications right

The modern browser makes an appealing client for web-based applications, but even browsers like Safari 3.1 that incorporate features of HTML 5 and CSS 3 have limitations that keep them from competing with native .Net and Java desktop applications. In those areas where a browser falls short, such as video and audio playback and local file access, the developer must resort to a plug-in that is not fully controlled by the browser script, or ugly call-outs from script to native code. Browser-based applications can't be packaged or signed for consistent and safe installation, and the "click to launch" capability that users expect from native applications can only be approximated. When you're running a browser-based app locally, there's no mistaking it for native software.
Adobe AIR is not yet widely known or implemented, but it solves all of the major issues keeping the browser from being a common front end for applications. Software written for the AIR run-time installs, launches, and feels like a native application. AIR is a WebKit-based browser, endowed not only with HTML and CSS but also SQL and Scalable Vector Graphics. AIR also incorporates Flash 9-based capabilities with a powerful, network-connected open source ActionScript interpreter. AIR gives the application ownership of the entire window, chrome and all, so that apps can look like native windows (the default) or widgets with irregular borders and transparency. If you come to AIR from the browser side, it is the ultimate standalone AJAX run-time, the smartest way to put HTML, JavaScript, XML, and CSS to use on the desktop, and you can afford to count on Flash being installed. If you look to AIR as a Flash developer, AIR is the standalone Flash 9 Player you've always wanted, with HTML and CSS fully integrated. And everybody gets SQL, which is no small thing. It's finally safe for us to admit that no matter how cleverly we manage it, XML is no substitute for a database.
It's also high time we recognised how dreadful browser plug-ins really are. Plug-ins integrate with the browser's object tree only as well as they choose to. They take over whatever drawing region you set aside for them, and what happens inside that area is entirely in the plug-in's control. One page that has a mix of HTML, Flash, and QuickTime content has three separate processes running three separate rendering paths to a single window. The burden on system resources is enormous, and I can't bring myself to imagine what adding Silverlight to the mix would do.
Can we trust Adobe to bring cross-platform HTML, CSS, XML, JavaScript, vector animation, audio, video, data persistence, packaged installation, standalone run-time, and security together in a single package, to create the next-generation desktop? Adobe's got the credentials. It has stewarded Flash so well that it transcends browser preference wars; Flash is welcome everywhere. Adobe guards that trust jealously, so it made sure that when it reached beyond Flash to create a full web-based desktop run-time, it did so with an uncommon commitment to transparency. The WebKit browser is free and fully open. Adobe gave its gem, the Flash ActionScript virtual machine, to Mozilla to create the Tamarin project, and then plugged Tamarin into AIR.
There is every reason to trust Adobe and AIR to carry internet-enabled desktop applications to the next level. As a web developer from way back, I'm excited about AIR's limitless possibilities for responsive and creative desktop apps. And the openness that Adobe invested in its solution will bring about a delightful consequence: An explosion in the worldwide library of well-written, great-looking applications.

AMD's ready to scale you up with Opteron

Architectural traits reaching back to Pentium remain present in the Intel-powered servers of today. The limitations of those servers aren't likely to be noticed as long as the routine of IT and commercial server buyers is to add capacity by scaling out, purchasing new two-socket servers. But the time will come when adding a rack server, or a rack of servers, is no longer the wise person's path to increased capacity. Smart planning will lead you to handle bigger workloads without more servers.
The terms "scale up" and "scale out" are sometimes unfamiliar to x86 buyers. They refer to the locale of capacity expansion, computing ("thinking") capacity in particular. A server that scales up can be made to handle substantially higher workloads through upgrades inside the chassis. These systems cost more at first, but they're designed to have untapped capabilities that you can turn on with an incremental investment far less than that of a new server.
Scale up is the factor that has kept proprietary Unix big iron in business. Linux on a commodity two-socket Intel server was supposed to push HP, IBM, and Sun out of business. It looks that way if you see a rack chassis as a rack chassis without regard for what's inside. But scale-up maximises everything from power savings and server consolidation ratio to server longevity, with the bonus of lower long-term costs and higher availability. All AMD Opteron servers scale up. It's baked into the CPU, the bus and the total system architecture. AMD's strategy is to make it possible to scale up any Opteron server for five years with only a CPU swap, no new server required. This stands in stark contrast to Intel's "tick tock" plan that attempts to nail IT to the stereotypical two-year purchasing cycle. Intel's two-year cycle of obsoleting chips makes parts scarce and expensive, so that if you do buy an Intel-based server with empty sockets, planning to scale it up, it's unlikely that CPUs precisely matching the models you have now will be available, and FB-DIMM memory at your existing Intel servers' speed may be hard to find as well. AMD's five-year plan is more in line with the way IBM treats, and retains, its customers.
Scale out means bigger racks, more servers, more heat, higher power and cooling costs, another tick on your service contract, another hand to hold in the middle of the night, and so on. The only thing going for it is convenience, and that's a powerful motivator. Most shops have the deployment of new rack servers down to a science, and there's rarely a need to even remove the cover on a server before you slide it into the rack. Opteron servers yield to the very same plug-and-play initial deployment, but in a few months when you'd ordinarily add a new server, you can take the scale-up route of your choice: Swap out your Opteron CPUs with higher speed or more cores, add RAM or use faster RAM, or fill empty CPU sockets with new CPUs. It really is as simple as it sounds, and when you (or your field service person) button up the case, you have a new server, or two, or two and a half, where your two-socket server used to be.
You have to adopt a long-term view to justify buying x86 servers that you can grow without filling more rack units, but the economy has a way of fast-forwarding reality such that the present suddenly laps the plan. If you're not already in spend-it-while-we-have-it mode, all forecasts indicate that you will be. Servers that you buy from now on should put you on course to grow your capacity, or to ready yourself for an overnight recovery, while you gently apply the brakes by reducing your costs now.
If that's too wishy-washy for you, I'll give you a hard example: A copy of Windows Server 2008 costs the same for a one-socket, four-way server as it does for an eight-socket, 32-way server. Each unit of Windows Server 2008 carries a licence that permits the operation of an unlimited number of Windows virtual machines on one physical server. Today, expanding Windows server capacity means buying more servers, and therefore more Windows licences. It may be that you have so many servers that a volume licence, as costly as it is, is cheaper or more convenient than one licence per server. Using any Opteron scale-up scenario, one Windows licence covers all the cores and virtual servers you can squeeze into one physical box. As a bonus, any variety of distributed computing is done faster on scale-up hardware because far more server-to-server communication is handled at the speed of memory rather than the speed of Ethernet.
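The arithmetic is easy to sketch. With invented figures (the licence cost and the VMs-per-box capacities below are placeholders, not Microsoft's price list or anyone's sizing guide), compare the licences needed to host the same virtual machine count by scaling out versus scaling up:
    LICENCE_COST = 4000      # placeholder cost of one per-server Windows licence
    VMS_NEEDED = 64

    # Scale out: two-socket boxes hosting, say, 8 VMs apiece; one licence per box.
    scale_out_servers = -(-VMS_NEEDED // 8)   # ceiling division
    scale_out_cost = scale_out_servers * LICENCE_COST

    # Scale up: one eight-socket, 32-way box hosting all 64 VMs on one licence.
    scale_up_cost = 1 * LICENCE_COST

    print("scale out:", scale_out_servers, "servers,", scale_out_cost, "in licences")
    print("scale up: 1 server,", scale_up_cost, "in licences")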
That scenario can be carried further. When you get to know Opteron, especially the quad-core Opteron CPU nicknamed Barcelona (revision B, with the TLB flaw repaired, is now shipping), I'll explain how AMD's redesign of the x86 architecture not only scales up through added components, but scales up through evolved software as well. There are many more features in quad-core Opteron than generic x86 and x64 operating systems use. You will scale up your quad-core Opteron servers merely by installing a Windows or Linux point release that includes Opteron-specific optimisations, or changing the architectural target of the projects you compile in-house. I realise that my strong position on Opteron and desktop derivatives, like the amazing Phenom, might strike some as bias. Please understand that when I dig into AMD CPUs and platforms as technology and foundation for IT strategy and investment, I simply see so many changes for the better.

Apple's BlackBerry offensive contains some untruths

Apple's market power derives not merely from its technology, but from its adeptness at reframing a familiar market to limit the field of competitors. In the most extreme example, Apple portrays its sole competitor as itself. The competitive messaging around MacBook Pro emphasised how it skunked PowerPC notebooks in performance. Later, Core 2 Duo MacBook Pro was sold as far superior to Core Duo MacBook Pro. Apple is 2X faster than Apple, so clearly, the smart money's on Apple.
At the press conference at which iPhone's Exchange Server connectivity and software development kit (SDK) were unveiled, Steve Jobs established and reinforced the premise that in eight months, iPhone has redefined the entire smartphone market. Windows Mobile and Symbian Series 60 are now irrelevant, leaving only two relevant players, iPhone and BlackBerry. Given that BlackBerry is old, tacky and unreliable, enterprises oughtn't waste time trying to prop it up. Out with the old, in with the new, he implied.
This mirrors the swipes that Apple used to take at Microsoft. They're always delivered with the Jobsian wink and smirk, but they are far from the offhand remarks they're packaged to be. They're very carefully targeted. In BlackBerry's case, Jobs took the opportunity to reveal some little-known information about BlackBerry — widely published, just not the kind of details that BlackBerry users care about — and portray it as a powerful disadvantage that makes the fresh technology that iPhone brings to the market a necessity. I grant that iPhone outshines BlackBerry as a platform for graphical mobile applications, with the drawback being that writing iPhone software for your personal use will cost you US$99 (NZ$122). In contrast, BlackBerry, Nokia, and Microsoft impose no charges. I think that Apple could have made more hay by showing a text-based custom BlackBerry app next to the same application done in Technicolour and full motion on iPhone. Instead, Apple focused its battle with BlackBerry on two simple points: BlackBerry handsets are ugly, and BlackBerry's network is old fashioned, insecure and unreliable.
I'll grant you, my BlackBerry 8820 is industrial in its styling. That was my choice. BlackBerry handsets now come in all sizes and colours, with the bonus that every model has the same messaging functionality. Consumers and fashion-conscious professionals have swarmed to the Curve, BlackBerry's jazzy QWERTY handset, and to more compact, phone-like devices with the same standard BlackBerry messaging capabilities. No BlackBerry's screen is as large as iPhone's, but iPhone's visible display space is cut considerably when the huge on-screen keyboard slides in. A BlackBerry squeezes more text onto its smaller screen, and both fonts and font sizes are adjustable to match your vision.
Every BlackBerry is operable with one hand or, if you use the in-handset voice dialling, no hands. Built-in GPS is there if you want it, with Google Maps and BlackBerry's own excellent mapping software showing you where you are and where you're going. Upgrade to the inexpensive and platform-defining TeleNav, and you'll find out why I can't leave home without its turn-by-turn directions called out by street name. My BlackBerry 8820's battery lasts forever compared to iPhone's. BlackBerry comes with a holster. BlackBerry handsets are available from all major US carriers, and they're subsidised. Even AT&T will amortise the cost of your BlackBerry device in return for a two-year contract commitment. With iPhone, your two-year contract commitment gets you list price, and you can shop around and pick any operator you like, as long as it's AT&T.
Apple's favourite way to pin the grey beard on the BlackBerry is to point out that it uses indirect delivery. All messages, regardless of their origin or destination, are routed through BlackBerry's proprietary network. Every message makes a stop at Research In Motion's network operations centre in Canada (Jobs: "It's not even in this country!") before being sent to a handset or mail server. In contrast, Apple and AT&T give you a direct TCP/IP connection between an employee's iPhone and your company's Exchange Server. Jobs wonders why BlackBerry users aren't concerned about security, given that all messages are gathered on a central group of servers, a single point of failure, where unencrypted messages sit naked and vulnerable to anyone roaming around the BlackBerry NOC. Can Americans really trust those nosy Canadians with our sensitive email?
It's funny that Apple, fronting for AT&T, points to the privacy risks of shuttling communications across the border. Aren't there some hearings on Capitol Hill about warrantless something or other, and pleas for legal protection of telecommunications companies that too eagerly spilled the beans on subscribers? Security begins at home, eh?

Apple's iPhone development kit rocks

Eight months ago, Apple was a non-player in the mobile space. Now, according to Apple, iPhone is the second most popular smartphone solution after BlackBerry. With all the hoopla he raised over iPhone at launch time, it's as if Steve Jobs saw this coming.
What he admits he didn't see coming was the market's reaction to the lack of a software development kit (SDK) that would support third-party apps on iPhone. iPhone is the only smartphone platform without custom application support, and that fact locked Apple out of the fleet sales that are RIM's bread and butter. It also disenfranchised the Mac developers who put Mac on the map and keep it there with, wouldn't you know it, native applications. I have weighed in on the subject of an iPhone SDK for native software in my usual soft-spoken, dead-horse-friendly way. "Apple, don't brag that iPhone runs OS X," I said, "until developers can get at it."
Come June, Apple gets a pass to brag about its mobile OS all it likes. That's when Apple is slated to deliver its SDK for iPhone, and from the work I'm doing with the publicly available preview tools and documentation, I can attest that iPhone will be the simplest, best-documented and most enjoyable platform a mobile application developer can work with. I have coded fairly extensively for Symbian, Windows Mobile and BlackBerry. iPhone just blows them away, making me wonder who decided that mobile development had to be difficult. I'll take that a step further: if you're new to programming, iPhone or iPod Touch is a splendid place to start.
I can't do the iPhone SDK justice in one post, but I can hit a couple of the high notes that earn Apple props for taking the SDK further than it had to. For openers, application developers don't need to use Objective-C, C or C++ to write software for iPhone. Apple added the one thing I was sure it wouldn't add — data persistence — to iPhone's Safari browser, paving the way for applications crafted in JavaScript, HTML and CSS that run even when the network is unavailable. What's more, iPhone's JavaScript persistence doesn't force programmers to deal with flat text files or XML. It uses SQL, complete with transactions. Apple also put some flash (ahem) in Safari's GUI with built-in support for Scalable Vector Graphics (SVG) and both automatic and explicit animation. Apple supplies web app code snippets that mimic iPhone's native GUI, and a web application can take over the whole screen, leaving no trace that it's running in the browser. iPhone's offline web app support is so strong that I'm looking forward to seeing it ported to Safari for the desktop.
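Safari's JavaScript storage has its own calls, which I won't reproduce here; as a stand-in, here is the same idea (SQL, complete with a transaction, persisted across restarts) expressed against SQLite's C API, the kind of embedded engine that sits behind stores like this. Treat it as an illustration of the pattern, not of Apple's API. The database and table names are made up.

    /* Illustration only: Safari's JavaScript storage has its own interface;
     * this uses SQLite's C API to show the underlying idea -- SQL with
     * transactions that survive a restart. Built with: gcc notes.c -lsqlite3 */
    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;
        char *err = NULL;

        if (sqlite3_open("notes.db", &db) != SQLITE_OK)
            return 1;

        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT);"
            "BEGIN;"
            "INSERT INTO notes (body) VALUES ('offline and still here');"
            "COMMIT;",
            NULL, NULL, &err);
        if (err) {
            fprintf(stderr, "SQL error: %s\n", err);
            sqlite3_free(err);
        }
        sqlite3_close(db);
        return 0;
    }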
On the native side, we now know that the iPhone OS is based on OS X 10.5, aka Leopard, and that Apple has catered to Mac developers. Their skills, and fair chunks of their code, will move readily to iPhone. In fact, there are so many similarities between the Mac and iPhone that much of learning to code for iPhone is familiarising yourself with what you can't do. For example, the same presentation facilities, such as OpenGL and Quartz, are present in desktop and iPhone OS X, but OpenGL is slimmed down to OpenGL ES (embedded systems), and Quartz is limited to 2D graphics. But use the word "limited" carefully where the iPhone SDK is concerned. Quartz may be limited to 2D, but it can still load, display, scale, annotate, and save PDF files. Can your phone or music player do that?
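Quartz's PDF handling is exposed through plain C functions, which is part of why Mac code moves over so readily. Here is a minimal sketch using the desktop CoreGraphics calls; the file name is made up and the iPhone flavour may differ in detail, so read it as a Mac-side illustration.

    /* Mac-side illustration of Quartz's C API (CoreGraphics); compiled on
     * OS X with: gcc pdfinfo.c -framework ApplicationServices -o pdfinfo */
    #include <stdio.h>
    #include <ApplicationServices/ApplicationServices.h>

    int main(void)
    {
        CFURLRef url = CFURLCreateWithFileSystemPath(
            kCFAllocatorDefault, CFSTR("sample.pdf"), kCFURLPOSIXPathStyle, false);
        CGPDFDocumentRef doc = CGPDFDocumentCreateWithURL(url);
        CFRelease(url);

        if (doc == NULL) {
            fprintf(stderr, "could not open sample.pdf\n");
            return 1;
        }

        /* Page count is enough to show the API; drawing a page into a
         * CGContext is one more call (CGContextDrawPDFPage). */
        printf("pages: %zu\n", CGPDFDocumentGetNumberOfPages(doc));
        CGPDFDocumentRelease(doc);
        return 0;
    }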
Native iPhone applications have access to standard POSIX C APIs and other must-haves such as Berkeley Sockets for TCP/IP communication. All third-party code runs in a sandbox, meaning that the OS exerts tight control over its access to system calls, TCP ports, files and other resources. You can't write an application that dips into another app's files. You can't write a custom mail or Telnet server that listens on the standard TCP ports for these services, whether the iPhone OS is using those ports or not. Of course, there's no path from the sandbox to any device internals that you could use to flip the phone to a different wireless operator. The sandbox is tight enough that a hacker would have to punch through it to pillage or hobble your iPhone, and Apple has set it up so that every application can be traced back to its creator. Apple's method for registering and certifying applications will engender controversy, but users need to know that they can sample the riches of iPhone software in complete safety.
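To show what "standard POSIX C APIs" means in practice, here is the sort of Berkeley sockets client a Unix programmer already knows. The address and port are placeholders, and on iPhone the sandbox decides what such code may actually reach.

    /* A minimal TCP client using Berkeley sockets -- ordinary POSIX C of the
     * sort that carries over to native iPhone code. Sketch only. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr;
        const char req[] = "GET / HTTP/1.0\r\n\r\n";
        char buf[512];
        ssize_t n;
        int fd;

        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(80);                      /* outbound web port */
        inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);  /* example address   */

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }

        write(fd, req, sizeof req - 1);   /* send a bare HTTP request */
        n = read(fd, buf, sizeof buf);    /* read whatever comes back */
        printf("read %zd bytes\n", n);
        close(fd);
        return 0;
    }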
I'll leave you with two details that put iPhone way over the top for developers: the multitouch display and the three-axis accelerometer. Both are accessible from native code as well as JavaScript. Complex multitouch gestures such as pinch, spread, sweep and circle are sent to software as events, along with the basic tap and drag. To make the on-screen keyboard appear, you don't ask for it; you simply move the focus to a text field.
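The SDK hands gestures to you as ready-made events, so you never have to do this yourself, but it is worth seeing how little a "pinch" is underneath: track the distance between two touch points and watch the ratio change. A framework-free sketch with made-up coordinates, not the UIKit event API itself:

    /* Framework-free sketch of the geometry behind pinch/spread recognition:
     * compare the distance between two touch points now with the distance
     * when the gesture began. Ratio < 1 is a pinch, > 1 is a spread. The SDK
     * delivers this as ready-made events; this is only the idea. */
    #include <math.h>
    #include <stdio.h>

    typedef struct { double x, y; } touch_point;

    static double touch_distance(touch_point a, touch_point b)
    {
        return hypot(a.x - b.x, a.y - b.y);
    }

    int main(void)
    {
        touch_point start[2] = { {100, 200}, {220, 200} };  /* gesture begins  */
        touch_point now[2]   = { {140, 200}, {180, 200} };  /* fingers move in */
        double scale = touch_distance(now[0], now[1])
                     / touch_distance(start[0], start[1]);

        printf(scale < 1.0 ? "pinch (scale %.2f)\n" : "spread (scale %.2f)\n",
               scale);
        return 0;
    }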
The accelerometer is developer candy that will break Apple into the gaming market in a way that the Mac never could. iPhone can sense orientation and movement in 3D space. As you move, or whatever is carrying your iPhone or iPod Touch moves, an application can know about it. The possibilities are endless, and there are serious uses for 3D position sensing that can't be set aside. It's an ultimately intuitive controller for complex processes that currently require operators to bypass humans' natural 3D perception in favour of 2D controls such as buttons, switches, mice and joysticks.
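However the samples reach your code, what arrives is a stream of x, y and z readings measured in gravities. Turning those into something an application can act on is a few lines of arithmetic. Here is a sketch of one common convention for pitch and roll, with a made-up sample; the SDK's own delivery mechanism is not shown.

    /* Sketch: convert one three-axis accelerometer sample (in g) into pitch
     * and roll angles using one common convention. The sample values are
     * invented, and the SDK's own callbacks are omitted. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 0.0, y = -0.45, z = -0.89;   /* device tilted toward the user */
        double pitch = atan2(y, sqrt(x * x + z * z)) * 180.0 / M_PI;
        double roll  = atan2(x, sqrt(y * y + z * z)) * 180.0 / M_PI;

        printf("pitch %.1f degrees, roll %.1f degrees\n", pitch, roll);
        return 0;
    }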
It's my job to dream big, but developers will come up with far more down-to-earth uses for iPhone and iPod Touch. Apple's main interest is in opening iPhone to enterprises that demand mobile devices they can customise to suit their needs. The SDK gets Apple there, and Apple's Mac-like approach stuffs the market with thousands of developers ready to code for the phone. Those enterprises in need of custom code for fleet-issued handsets don't have to look very far for talent. By year's end, there will be a glut of great software for iPhone and iPod Touch, much of which will cost nothing. And to top it off, Apple will host the entire catalogue of third-party software.

AMD's Cartwheel platform promises simplicity

If the reality of the "standardised PC" were aligned with the rhetoric, no PC would ship with a separate driver disc. Windows XP would install onto a blank hard drive in the time it takes to copy the files. There would be no Found New Hardware Wizard, and if you inherited a PC with no discs or documentation, you could be certain that a store-bought Windows Vista DVD would be the only thing you'd need to make it work.
That's the reality for every modern-era Mac. A used Mac, plus nothing but a generic copy of Leopard, is a working computer. On that Mac's first connection to the internet, all of that specific model's latest device drivers and firmware are downloaded and installed in one hands-off operation. Surely, if someone were given a chance to lay out the requirements for a PC standard from scratch, this sort of simplicity would be among them.
PC users could have computers that install from scratch with generic Vista or Windows media. If you knew that the essential device drivers were on every Microsoft install disc, and that all of a system's drivers could be updated at any time with a single download, that would feel more like the sort of standard you'd expect. I was pleased to find that a major piece of the bridge to this future recently fell into place.
I just took delivery of a box containing a reference system for AMD's new Cartwheel (780G series) desktop platform. Inside an unnecessarily large, black, desk-side chassis was a system built around a very green (2.5GHz, 45 watt) dual-core AMD Athlon X2 4800 CPU. This system is what I now demand all desktops to be when I'm not racing them: silent. But to my point about standard platforms: all systems built on AMD's Cartwheel, regardless of vendor, will use an identical bundle of device drivers for the CPU, core logic, internal and external SATA disk controllers, RAID, Ethernet, multi-display 3D accelerated graphics (DirectX 10 compatible), DVD/Blu-ray/HD DVD decoding, and USB 2.0. Any system based on Cartwheel runs Vista out of the box with the drivers Microsoft put on the disc, and runs fully optimised after one trip to AMD's Web site to download the latest driver bundle.
The problem with most attempts at platforms is that they are inflexible. For example, Intel can claim that its chipsets' benefits overlap with AMD's, but Intel's chipset-integrated graphics are barely adequate for text, much less 3D. AMD played the trump card of its engineers from graphics chipmaker ATI, so even the least of the Cartwheel desktops will be able to play Blu-ray and HD DVD movies, along with other HD content, games, and, oh yes, Vista. While Cartwheel will get this done, and establish lower price points doing it, it has another advantage that Intel lacks. For those users and system makers wanting more 3D kick from Cartwheel than the 780G integrated graphics provide, AMD offers the unique option of Hybrid Graphics: you can add an AMD/ATI discrete 3D graphics accelerator, ranging in power and price from bargain bin to barnburner, and when running Vista, Cartwheel systems will use the combined rendering power of the integrated and discrete GPUs (graphics processing units). Even with Hybrid Graphics, the platform still uses one set of drivers common to all implementations, downloadable from AMD.
The Cartwheel desktop platform will have a Puma counterpart for notebooks, extending the reach of AMD's consistent, unified PC platform to all clients. Is this certain to carry buyers of AMD 780G systems toward Mac-like simplicity? There are a couple of major bumps in that road. One is the BIOS. Each PC maker contracts out the initial and continued development of its systems' boot firmware and arranges distribution to customers. As long as a user can be cornered into having to flash his PC's BIOS to get an OS loaded, no PC can claim to be as easy to deploy as a Mac. The other limitation is audio, which is not part of the Cartwheel/Puma platform, so neither AMD nor users can predict which one of many digital audio chips their system will use. Audio drivers are often missing from Windows install discs, forcing you to find them on vendor-supplied media or on the vendor's Web site.
I can still see a day when an AMD platform-based PC will boot from a Microsoft install disc, connect to AMD.com, automatically identify and download the latest unified drivers, and come to life as a fully optimised PC, all without the user's intervention. That's as it should be, and as I've said, I think that AMD is the only outfit that could pull this off. Until then, customers who buy AMD 780G platforms from whatever system makers they choose will find that their CPU, core logic (chipset), and graphics device drivers are developed and maintained by, and downloadable from, AMD. That is a major step forward.

Microsoft opens up — just a little

I'd like to see the specifics of Microsoft's new open source interoperability initiative, but the link to the FAQ (frequently asked questions) takes me to a page that says, "We're sorry, but we were unable to service your request". I think that's the answer to my own frequently asked question: What is the open source interoperability initiative?
You shouldn't draw too many conclusions from the fact that osi.org takes you to Ontario Swine Improvement, an idea that mystifies me more than Microsoft's OSI. OSI also happens to be the initials of the Open Source Initiative, the body that rules on the legitimacy of homespun open source licences, of which Microsoft has two: the Microsoft Public Licence and the Microsoft Reciprocal Licence. Both were filed late last year, and they are what I'd like all legal documents to be: concise. They offer royalty-free licences to the software and permit redistribution of derivative works, provided that attributions are maintained. Short and sweet, yes?
There's a little catch in the language of these licences: the phrase "licensed patents", defined as "contributor's patent claims that read directly on its contribution". The contributor is Microsoft, so this phrase says that all of that royalty-free, unfettered-redistribution goodness doesn't apply unless you've licensed the applicable patents that Microsoft has attached to the contribution. Now, if Microsoft contributes apparently open code, an API (application programming interface) or a protocol that is itself derived from patented work, or that is great-grandfathered by an obscure patent on the letter "q", it gets messy, especially if any of the contributions arrive in uncommented object code form.
I recall a conversation I had years ago, and I swear to this, with a mildly besotted Microsofter who declared that Microsoft had a patent on the run queue, a list that keeps track of the order in which processes will run on the CPU. I asked, "Really?" and he said "abschuhoodly". For all I know, he's right. I expect that between Microsoft and Novell, everything that inventors hadn't the presence of mind to patent is now patented.
Microsoft states in its open source licences that non-commercial use of its code, APIs and protocols is okay and royalty-free. But let's say that somebody likes a date-to-string function you lifted from Microsoft's patented Exchange Server API and rolls it into their open source mail client. That client is subsequently folded into, say, OS X, and at that point, it's gone commercial. Land mine.
I don't know how this will sort out. As I see it now, I wouldn't touch code created by anyone who has come within whiffing distance of Microsoft's published code, APIs and protocols. How am I supposed to know whether someone's going to sell the code derived from my code derived from Microsoft's patented protocols? I'd only lift my quarantine if Microsoft took to tagging everything I might want to use as 100% patent-clean. Perhaps it will set up a legal department just to declare hunks of code patent-free.
It speaks in Microsoft's favour that, with the encouragement of the US Department of Justice and the European Union's Court of First Instance, Microsoft has been negotiating with Xen, JBoss, and other open source projects that turned into commercial software. Microsoft's patent arrangement lays out the usual "fair and reasonable" language in relation to patent licences. As long as vendors are free to share with us the terms of their patent licences, I'm cool with fair and reasonable. If the licence agreement requires that the terms be held confidential, that'd make me a bit squirmy.
Perhaps I'm exaggerating about the patent risk, or perhaps not, but let's keep in mind that newly open Microsoft is the same Microsoft that was SCO's primary cheerleader in its (my opinion) scheme to extort licence fees from Linux users. SCO had not proven, and never did prove, its claim that Linux contained stolen code, but Microsoft kowtowed to SCO in a letter that conveniently excoriated every competitor of Microsoft (except Sun, which also ponied up) for abusing licences and patents. It was the most disgusting press release I've ever read. So now, when Microsoft says "open", my mind immediately goes back to that chapter in Redmond's history.
I'm laying on the cynicism as my way of telling you to be careful. I have absolutely no doubt that there are people inside Microsoft who believe in this programme deeply, and who want to see it succeed for the best of reasons. I know I'll hear from them; I want to. I'll bring what they have to say straight to you, uncoloured by bias.
