Stories by Tom Yager

Smaller deployments are best for iPhones

Apple's pitch on iPhone 3G is that it's as well suited to enterprise use as a BlackBerry. The core technology is certainly there, with an ActiveSync (Microsoft Exchange Server) mail client, AJAX-capable browser, Cisco-compatible VPN, and Office and PDF mail attachment viewers. iPhone's UI revolutionised the mobile industry with scalable text and graphics, a display surface capable of responding to multifinger gestures, and an on-screen keyboard that works without a stylus.
So iPhone has the essential enterprise ingredients. The question is, does Apple's recipe fit the enterprise better than alternatives? Having worked with iPhone since last June, I can give you the honest answer: yes and no. iPhone is an unqualified hit among users. No one will complain about being migrated from whatever they're carrying now to an iPhone 3G. Employees and contractors will trample each other for a shot at an iPhone, unwittingly exposing themselves to better reachability and collaboration. For Mac users, it's practically pointless to carry anything else.
iPhone is a smart way to keep workers in touch while they're travelling because it's an unparalleled lifestyle accessory. Anyone who owns one will always have it with them, talking, texting, surfing, listening to music, and watching videos. Enterprises shouldn't brush this aside as a consideration. A mobile device is of limited use if its user can't wait to be without it.
Apple invested the bulk of its initial effort in the design and implementation of iPhone to make the device irresistible to users. Mission accomplished. Phase two made the device an easy sell to developers. I'm still waiting for phase three, which makes iPhone enterprise-friendly for configuration, equipping, deployment, and management in substantial numbers. Right now, the best I can say is that an enterprise deployment of iPhone can be done, but not as easily, flexibly, or securely as for a BlackBerry or Windows Mobile device.

iPhone trumps BlackBerry as user device

Mobile solution decision-makers, from individual professionals to CTOs, are beginning to see the need for style to play an increasing role in device selection, and the iPhone 3G is the de facto choice.
Apple's iPhone 2.0 OS brought Cisco VPN, Exchange Server email, and native custom applications to Apple's devices, bringing utility to the mix to make the iPhone an enterprise shoo-in. On style, the iPhone is unbeatable. As a lure for prospective employees, a salve for ailing morale, or an image-setter in a business meeting, the iPhone 3G is unmatched. For some millions of buyers, that's the whole story, full stop, and there is nothing wrong with that.
You may know that I have embarked on a project to supplant deployed BlackBerry handsets with iPhone 3G devices in an enterprise scenario. I've spent the past month or so on that effort. There's too much to cover in one column, so I'll reveal my findings over several weeks, starting this week with the user experience of a professional switched from BlackBerry to iPhone 3G.

Think "mobile computer", not mobile phone

You don't have to be a programmer to be a mobile innovator. All you need to do is open your eyes to the fact that a smart phone or QWERTY handset is a personal computer, sans legacy baggage. In the future, user-facing computers will have more in common with the high-end mobile devices of today than with the eight-core desktops and quad-core notebooks of 2009.
We're conditioned to see mobile handsets through the lens of evolution. Car phones became cellphones, and from there took over for pagers, PIMs, PDAs, Polaroid cameras, digital voice recorders, media players, Day-Timers, the photos of nieces and nephews in our wallets, and so on. Each of these steps has taken years, mostly due to the market's narrow view of the purpose and potential of mobile devices. They're not car phones plus plus. They are not purpose-specific platforms. They are mobile computers, plain and simple.
While it's easy to imagine the iPhone as an Apple TV hit with a maximum-strength shrink ray, it's harder to look at a Nokia Series 60 device without classifying it as a feature phone, smart phone, gadget-lover bait, enterprise phone, or the like. If you can make yourself think outside the box with regard to a commercial mobile device's potential, much becomes possible.
In the case of BlackBerry, Android (at present, limited to Java), iPhone 2.0, and Windows Mobile in the post-.Net age, exploration of those possibilities is limited by boundaries drawn and enforced by manufacturers and wireless operators — limitations that keep many devices stuck in the phone box. In contrast, Nokia offers developers a PC-like range of choices in tools, languages, and APIs: Locally runnable AJAX and widgets, Adobe SWF, standard C with Berkeley sockets, Java MIDP, Python, Symbian C++, and ARM assembly language. The steak comes with the requisite sizzle: Every Nokia Series 60 device is a media player, camera, PDA, mail client, internet terminal, and all the rest. But the Nokia mobile platform's real power lies beneath the top-layer apps that Nokia and your wireless operator burned into your device's default firmware.
Nokia hasn't done a very good job of exposing the Nokia platform's riches — the full-fledged computer-ness that is built into handsets — but that's changing. Forum Nokia is the developer relations and initiatives arm of the cellular handset giant, and you can tell from the moment you hit its landing page that you're not far from transforming your understanding of what the platform can do.
Nokia's dev tool choices, membership levels, and tech support policies are less flexible than I'd like, but I have to give Nokia credit. It has anticipated and answered all of the show-stopping, schedule-derailing questions that developers and enterprises could have, in the form of vastly improved documentation, Flash e-learning modules, and forums in which Forum Nokia engineers participate. Try it. If you have a Nokia phone, head to the Forum Nokia site, and within fifteen minutes you'll know exactly what it can and can't do, and the ideas will start to flow.
Nokia's challenge is to get people to tune in. After examining its options, Forum Nokia took the not-uncommon approach of declaring a developer contest. Nokia is putting up US$150,000 (NZ$226,000) in prize money, along with trips to the Mobile World Congress in Barcelona, for new mobile applications that compete in three categories: Eco-Challenge, Emerging Markets, and Technology Showcase. At a launch event in New York, inventor and visionary Dean Kamen fired the starting pistol for the competition that Nokia has dubbed "Calling All Innovators". If you write code, any kind of code, you need to jump on this. Mobile devices are a veritable green field for developers and forward-thinking IT managers alike. IT should acquaint itself with the characteristics of the Series 60 platform and pick some of its brighter talent to dream up and pursue world-changing ideas. If you're a code diva, the Technology Showcase category is the place to show off your chops to a world audience.
For the Emerging Markets category, consider that there are people for whom a mobile device is the closest thing to a PC for miles, and cellular data is the only digital connection to a larger world. While multiple vendors are trying to build the $100 laptop PC to serve emerging markets, mountains of refurbished and donated Series 60 devices are ready to distribute for the cost of transportation. Even long-discontinued devices are surprisingly capable computers. Consider accessibility as an emerging market as well. Many Nokia devices have integrated speech synthesis and recognition. Finally, you might look at services or applications that are supplied in a limited number of languages and internationalise them.
For the Eco-Challenge, I suggest considering Nokia devices' prowess as radio transceivers. The cost of equipping environmental sensors (including current sensors for power monitoring) with Bluetooth, wi-fi, infrared, or even GSM signaling is dropping fast, and Series 60 native code can process, log, and respond to sensor input with minimal lag. Nokia devices are loaded with sensors of their own. Depending on what you want to measure, the cheapest sensor may be the Series 60 device itself.
My point is less about Calling All Innovators than it is about seeing mobile devices from top-tier manufacturers, with open and well-documented APIs, for what they are. They are mobile computers, capable of things you'd never dream of asking a smart phone to do. Someone once imagined running a web server in the background on a Nokia device, and someone else wondered what it would be like to drive a handset's display and keyboard remotely. Both of these stereotype-busting applications exist. Take them as inspiration and run with it.
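The Berkeley sockets support mentioned earlier is enough to make that web server anecdote concrete. Here's a minimal sketch in standard C, assuming a POSIX-style runtime such as the one Nokia's Open C plug-in supplies on Series 60; the port number and canned response are my own, for illustration. Nothing in it is phone-specific, which is rather the point.

    /* Minimal HTTP responder over standard Berkeley sockets. Nothing here
       is Symbian-specific; it assumes only a POSIX-style C runtime such
       as the one Nokia's Open C plug-in supplies on Series 60. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        const char reply[] =
            "HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\n"
            "Hello from a mobile computer\r\n";
        struct sockaddr_in addr = { 0 };
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        if (srv < 0) { perror("socket"); return 1; }
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);    /* any local interface */
        addr.sin_port = htons(8080);                 /* unprivileged port */
        if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(srv, 4) < 0) { perror("bind/listen"); return 1; }
        for (;;) {
            int client = accept(srv, NULL, NULL);
            if (client < 0) continue;
            write(client, reply, sizeof(reply) - 1); /* ignore the request */
            close(client);
        }
    }

Point a browser at port 8080 of the device's address and the handset answers like any other server on the network.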

Next-generation mobile is all about the cloud

"Cloud" has a special place in my hit parade of despised neo-techno-vernacular. Unlike Web 2.0, my all-time favorite, at least "cloud" is somewhat self-descriptive: Formless, vaporous, and a semi-reliable indicator of climatic conditions. If you point at a round, puffy cloud and declare that it looks like a pitchfork, and someone with you nods and says, "Cool, I can see that," the forecast is mostly patronizing with zero vision and periodic sucking up. You're in trustworthy company if that person says, "Are you blind?" If someone in a meeting refers to a cloud, or worse still, the cloud, don't nod just to keep the conversation going. Consider it your duty to ask them to define the term.

Intel's Nehalem levels playing field with AMD

At its recent Developer Forum, Intel laid out the last remaining secrets of Nehalem, a remade x86 platform built around a highly integrated Core 2 microprocessor with on-board memory and point-to-point bus controllers. I fairly raved about it, a fact that caught some who view me as an AMD die-hard by surprise.
Being a chiphead, but well shy of a chipmaker, I needed an independent perspective on Intel's first substantial effort to carry the x86 platform beyond the standards and boundaries set by IBM's PC-AT. Intel finally dumped the shared bus. That doesn't set a new high for the industry, but it certainly redefines Intel and lays down a new road for Intel-based servers and workstations.
While I had a head full of Nehalem facts, what I lacked was balance. Intel's IDF sessions on Nehalem compared the platform and CPU architecture only to Intel's work to date which, while I wouldn't call it sub-par, had been out-engineered by AMD.
For a valid contrast, I need to weigh Nehalem against AMD's own 45nm quad-core technology, dubbed Shanghai. That's a platform/architecture shoot-out for the ages, and I'm all over it. But since neither technology is shipping yet, all I can compare is detailed specs and higher-altitude rhetoric. The specs will take some digging. Today I'll tackle the rhetoric, the rationale behind Intel's design decisions, and the packaging of those decisions as competitive advantages. How much of what Intel's done with Nehalem is actually unique, and will IT feel the difference?
I brought a slate of questions to AMD and invited an expert in AMD server architecture and platforms to address them. We covered an enormous amount of ground. The upshot is this: AMD celebrates Intel's validation of innovations that AMD designed into Opteron closer to the turn of the century. The x86-64 instruction set, non-uniform memory access (NUMA, which gives each processor socket its own RAM), Direct Connect (Intel calls its incarnation QuickPath) socket-to-socket bus, on-chip independent memory and bus controllers, independent power control for each core, internal power and thermal management, multiple processors on a single contiguous mask, dedicated Level 2 cache for each core, and shorter pipelines are features that AMD claims as firsts in the x86 domain. Intel once blew off each of these features as irrelevant. Now Nehalem and related platforms adopt all of them.
This is a good thing. When Nehalem goes up against Shanghai, it's apples-to-apples on platforms, or at least it will be treated as such. Even though the savviest server buyers could grasp the scalability advantages of AMD's NUMA and Direct Connect over Intel's shared bus, AMD won't be able to put across subtler differences between AMD's and Intel's implementation of the same ideas. When platform differences get small enough to require debate among gearheads, there is little chance of translating platform engineering variations into criteria relevant to mainstream IT.
Intel hopes to get some traction with a feature that AMD lacks, an integrated power microcontroller. AMD had no specific observations to offer on a feature that Intel kept secret until a couple of weeks ago. On servers, AMD believes that power conservation is done most effectively at the wall: If a server isn't working hard enough, turn it off. That's my line, but it's a long slog to get IT to take up this idea. In the meantime, servers should be at least as clever as desktops at using less energy when they have less work to do.
I understand that AMD is frustrated by the very roadblock I've called out: No matter how ingenious chip engineers are, if Microsoft doesn't pick a feature up, it's as good as wasted effort. This is especially true of power management. If server BIOSes and Windows Server OSes don't leverage Intel's power management microcontroller any better than they do quad-core Opteron's designed-in efficiencies, then Intel's bragging point will be lost on all but notebook users. If Intel has enough sway to make Microsoft twiddle Nehalem's power knobs and dials, or better still, let the microcontroller manage them itself, then Microsoft will have to answer for failing to invest as much care in exploiting architectural and platform features unique to quad-core Opteron.
Intel did more than catch up in cache design for the Nehalem architecture. The huge, shared Level 2 cache has given way to a much smaller Level 2 cache dedicated to each core. Like AMD (and like some previous Intel Xeon designs), Nehalem adopts a three-level cache. Intel uses the Level 3 cache to implement cache probe filtering, a technique that cuts down on core-to-core bus traffic. The handling of cache is a major and palpable differentiator between CPU architectures, especially as other engineering gaps tighten. There is a lot of room for innovation here.
Intel makes marketing hay with the sort of esoteric innovations that grab my attention, but which AMD asserts won't be felt on the server side. One such feature is HyperThreading (HT). This NetBurst feature got the axe when Intel went to Core. I always considered HT one of Intel's bolder engineering moves, and now we'll see how it fares in a modern setting.
AMD is betting that instead of pulling up to a 30% increase in performance with ideally tuned workloads, HT will bring single-digit boosts. In AMD's view, HT came about as a means of giving the chip something to do while it was waiting for memory. Now that on-chip controllers and faster RAM knock memory latency down to a fraction of what it was in the Pentium 4's heyday, there isn't that much waiting time to fill. I have higher expectations in the long run. I think that multithreading will become the smartest way to squeeze more performance out of a socket, especially as programmers and compilers get smarter about parallelisation of code.
AMD considers it unlikely that server applications will feel other Nehalem platform and architecture enhancements, such as Version 4.2 of the Streaming SIMD Extensions (SSE) and Intel's Application-Targeted Accelerators. Both require recoding, perhaps hand-coding to put to use. I see tremendous potential benefit, but it's only reachable where developers are willing to risk incompatibility with other types of systems.
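To give a feel for the recoding burden, here's a minimal sketch of one SSE 4.2 feature, the hardware CRC32 instruction, reached through compiler intrinsics; the function and buffer handling are my own illustration.

    /* CRC-32C over a buffer using the SSE 4.2 CRC32 instruction via
       compiler intrinsics. Build with: gcc -O2 -msse4.2 crc.c */
    #include <stdint.h>
    #include <stddef.h>
    #include <nmmintrin.h>          /* SSE 4.2 intrinsics */

    uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++)
            crc = _mm_crc32_u8(crc, buf[i]); /* one hardware CRC step per byte */
        return crc ^ 0xFFFFFFFFu;
    }

Run that binary on a CPU without SSE 4.2 and it faults. That's the incompatibility trade in a nutshell.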
There's the rub. At the platform level, Nehalem's advances will be felt by IT without requiring any change in software. To feel Nehalem's power management and architectural (CPU) performance tweaks requires new code. That's effort that high-performance computing and specialised verticals like medical imaging will shoulder, but as a rule, major ISVs shy away from forking code to bring advantages to a small fraction of the x86 server installed base. And as much as I enjoy handwritten in-line assembly code, it's not your average in-house coder's cup of tea.
AMD hopes to call attention to the fact that Nehalem requires new servers. AMD's message is one of platform longevity. Between serious platform revisions, AMD lets customers and OEMs do system upgrades with CPU swaps and BIOS updates. AMD's OEMs aren't consistent in enabling this — I don't know how many IBM server customers were able to get dual-core to quad-core Opteron chip upgrades. Longevity should matter a great deal as budgets tighten. If Intel feels it finally has a platform with some headroom to it, it might relax the projected expectation of full system replacement every two years.
Since AMD had the floor to itself, it couldn't resist taking one last shot. Now that Intel has shed much of its legacy system platform, it faces a legacy core architecture. Nehalem is still a modified Pentium III core. During the time that Intel has spent cleaning up its platform, AMD, whose server platform has neither had nor needed overhauling since gen-one Opteron, has been working on CPU architectures. Its most anticipated project is Fusion, a CPU that brings AMD's microprocessor, embedded, chipset and GPU chops, along with some time-proven big iron ideals, to bear on a single socket. Until AMD can blow Intel away again, it's comfortable with a share of the x86 server market.
The fact that Intel's platform has caught up to AMD's doesn't automatically put AMD at a disadvantage. What it does is level the field, which breeds aggressive innovation and pricing as close competitors fight to differentiate their products. Apples to apples in x86 servers is good for IT.

Apps get browser-like capability with WebKit

Inside every browser is the core of the ideal client-side application environment, incorporating everything that I'd estimate half of commercial applications need. There's the best dynamic, object-oriented, loosely typed programming language, bar none (JavaScript), transparently bound to a very simple yet extensible presentation layer (DHTML, CSS, XML, DOM, SVG...). Browser-based apps don't require specialised development tools, or any tools at all. All that keeps your browser from being the perfect client app environment is speed, stability, strict adherence to standards, and offline capabilities. That is not too much to ask; it's all within reach, right now. It's a matter of adjusting our priorities and perspective.
Forget about trying to make projects fit browsers and focus on higher objectives. We want a cross-platform, cross-architecture development platform that can take an app from behaviour-accurate prototype to full functionality in stages and with minimal skill. We want applications that can be QA'd in situ at the operational level, patched remotely, and updated automatically. We need to adapt to specifications that are altered while the project is underway. Code has to be ultimately reusable, and we need the capability to easily reach out to legacy back-ends. We never want to hear our support staff tell a user "we can't reproduce that problem". If we have to let a developer go, we want to know that he can't lock up his code on the way out, and that anyone of comparable skill can take over and be up to speed in a couple of days.
Now, knowing what we want, we can think about how to achieve all of this, and now we can come back to the browser. If we could achieve the ideal, a browser's got everything a development project would love to have but can't dream of putting into the schedule. The trouble is that browsers are designed for surfing, not as application platforms. Think about it. If you were cranking up a new client development project, would you issue a statement of objectives that it must look like a website, take twenty seconds to paint a window, offer no feedback when you click a button, skip reporting the progress of transactions, refuse to run unless you're connected to a network, and force users to re-enter form data if there's a hiccup in delivery? It's telling that the first thing a would-be web application does is free itself from the trappings of a browser: It removes the navigation bar, the menu (when it can — OS X doesn't allow it) and the status bar, redirects the right mouse button away from the default context menu and makes it impossible to resize the window. If you use the browser, the standard is to work like hell to hide it, and to solve performance problems by embedding Java or ActiveX objects.
It reads like a no-win deal until you realise that you don't need a fat, clunky browser. You don't need to host a browser in an application window. Just take the framework shared by multiple commercial browsers and bake it right into your project. That's WebKit. At a total cost of nothing and with free lifetime updates, it's as sweet a deal as you'll find, and unlike many open source projects that you'd love to use but which vary in the quality of support, documentation, and maintenance, WebKit is driven by companies like Apple, Nokia, and most recently, Google, that rely on it for commerce. The Iris Browser from Torch Mobile, which uses WebKit, is the first worthwhile free alternative to Internet Explorer on Windows Mobile devices, and it's the best first pass I've seen yet on borrowing iPhone's touch interface. Even though it's most frequently seen in browsers, WebKit is a lot more than a browser in a can. It advances the state of the art faster and farther than is required for a browser.
The best example of this is WebKit's new bytecode-driven JavaScript engine, SquirrelFish. The latency associated with the retrieval of most web pages makes the speed of JavaScript execution a minor issue, but JavaScript's poor performance takes it out of contention as an application language. It's not a hard problem to solve; there is no shortage of engineers skilled at making interpreted languages run faster. There just wasn't the will to do it for JavaScript until some people realised that JavaScript is a serious language in need of a serious implementation. SquirrelFish takes two vital first steps toward elevating WebKit's JavaScript to first-class status: Mapping bulky JavaScript to more efficient, partially digested ("compiled") bytecode, and using a register model instead of a stack model.
The stack model stuffs all of the data passed between functions into a single pool of memory. It is the duty of every function that uses the stack to leave it precisely as it found it lest other function calls get the wrong data passed to them. Functions have to pull data from the stack to make local copies for their use, and to return data to the calling function they must shove the results back onto the stack. Stack-based interpreters are easy to write, but hard to optimise. Register-based interpreters use a direct reference (in loose terms, pointers) to data needed to call a function. Just this one change from a stack-based interpreter to a register-based virtual machine delivers performance gains of 1.5x to more than 3x depending on the operation, and that's how WebKit without SquirrelFish compares to WebKit with SquirrelFish. It borders on unfair to compare WebKit (Safari 3.1) performance to Firefox, but it does highlight the difference between a JavaScript for applications and a JavaScript for surfing.
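A toy fragment makes the difference plain. This is my own sketch, not SquirrelFish's code; both halves evaluate c = a + b:

    /* Toy illustration of stack vs register dispatch; not SquirrelFish's
       actual code. Both halves compute c = a + b. */
    #include <stdio.h>

    int main(void)
    {
        /* Stack model: operands are pushed, popped, and the result pushed. */
        int stack[8], sp = 0;
        stack[sp++] = 2;            /* PUSH a */
        stack[sp++] = 3;            /* PUSH b */
        int rhs = stack[--sp];      /* ADD: pop right operand... */
        int lhs = stack[--sp];      /* ...pop left operand... */
        stack[sp++] = lhs + rhs;    /* ...push the sum */
        printf("stack result: %d\n", stack[sp - 1]);

        /* Register model: one instruction names its operands directly,
           skipping the push/pop memory traffic entirely. */
        int r[4];
        r[0] = 2;                   /* a lives in r0 */
        r[1] = 3;                   /* b lives in r1 */
        r[2] = r[0] + r[1];         /* ADD r2, r0, r1 */
        printf("register result: %d\n", r[2]);
        return 0;
    }

The stack half touches the operand pool five times for one addition; the register half names its operands directly, and that push/pop traffic is where an interpreter's time goes.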
It's not that no one thought of making a bytecode JavaScript, any more than it's a new idea to put smoothly scaled and animated vector graphics in a browser (SVG). No one cared to do it because the day-to-day surfing experience wouldn't be enhanced by it. WebKit has higher aspirations than surfing, and there is more advanced science in WebKit than its JavaScript interpreter. The whole framework is shifting into ever higher gears in performance, standards compliance, completeness, and stability. WebKit is a framework that brings the benefits of a browser to all applications, across platforms, even ones that don't use the network. It doesn't hurt that WebKit is free and open source, that a Safari-workalike browser is included in the distribution, and that it uses middleware (HTTP) and object representation standards (XML) that bind it to all back-ends. Get WebKit and be proud to use browser technology in serious applications.

Microcontroller steals the show at IDF

Intel Developer Forum has wrapped up, and there's no question that Nehalem owned the show. Intel's engineering crew was practically beside itself; finally, it had something new to say to software and hardware developers. It was hard to tell whether the phrase "most significant update to Intel's x86 in ten years", uttered often by Intel staff, carried a tinge of frustration, but Nehalem's specs elevate that mantra from marketing to reality.
When Intel opened its raincoat recently to reveal Nehalem's secret weapon — an on-package power management microcontroller — I shouted "that's what I'm talking about!". That's the way to bring more than lip service to green IT, guys, and sooner than most observers (myself included) expected. Sign me up for three-level cache architecture, hyperthreading and direct virtual machine links to physical peripherals, but if Nehalem's power management delivers its potential, and if Microsoft and Intel server OEMs exploit the technology, I'm open to declaring a new ball game in x86 servers.
Modern x86 CPUs are pretty stupid by mainframe standards. We got so caught up in making microprocessors fast, small and cheap that we scooped out the qualities that have defined server systems since we referred to IT as Data Processing. AMD, with its close relationship with IBM, looked on track to make x86 server CPUs serious machinery — self-monitoring, self-healing, self-reporting, made of multiple autonomous units that can be dispatched to specialised tasks without interrupting the flow of common work. I never expected that Intel would beat AMD to it, but if AMD had done Nehalem, I'd judge it a functional early pass at elements of the mainframe-inspired server CPU design laid out by AMD's CTO two years ago.
Intel baked a small sample of task-specific autonomy into Nehalem with a couple of small, highly specialised instruction units that I see as flagbearers, a preview of what Intel is able to add to future x86 CPUs in microcode or through some similarly simple mechanism. But the more impressive accomplishment is Nehalem's incorporation of a power management microcontroller. Intel claims that this will monitor temperature, power utilisation and workload, and apportion that workload among as few processor cores as are needed to do the job. Cores that would ordinarily divvy up mundane threads that could be executed more efficiently by a single core aren't merely idled, they're powered down. At least that's the pitch. Intel is a bit coy with the details, except to say that in transistor count, Nehalem's power controller is similar in complexity to an 80486 CPU. The message there is that Nehalem's power controller really is an autonomous unit, and Intel's use of the term "microcontroller" signals to me that it is externally programmable. I'll be disappointed if reality doesn't match the message.
The detail that Intel hasn't addressed relates to operational ownership of this microcontroller, and that's a particularly sticky point for me. As processors become more malleable, who gets to shape them, and who gets to shut the door to further changes? BIOS? Boot loader? Kernel? Device driver? I've addressed the opaque, proprietary control that independent BIOS vendors, system OEMs, and Microsoft exert over processor and device registers that have a dramatic impact on performance. Nehalem's power controller has similar reach with regard to power utilisation and scheduling.
To be blunt, it's a resource that Microsoft will want to own, or reserve the right to disown by overriding the power controller's settings with Windows' more primitive run-time controls. This is already seen in AMD Barcelona servers running Windows Server 2008. Left to itself, Barcelona can manage bus and core power beautifully without Windows lifting a finger. That's core to the CPU's design. Yet Windows can ignore BIOS and user-defined power settings, and there's no checkbox to disable Windows' power state manipulation.
There should be. By putting a microcontroller in charge, Intel's gotten the best kind of religion with regard to power control. Nehalem reads like a CPU with a built-in greenness dial. But you'll never feel it if BIOS and the OS lock it down, and if Intel doesn't provide developers and users with the means to grab control at run-time. If I want to run my server on one core over the weekend, I should be able to do that.
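For now, the closest an administrator can get is OS-level processor affinity. This Linux sketch pins the current process to core 0; it's thread placement, not the power gating Nehalem's microcontroller promises, but it shows the kind of knob I'm asking for:

    /* Pin the current process to core 0 on Linux. This is OS-level
       thread placement, not hardware power gating, but it frees the
       other cores to idle. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                          /* core 0 only */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to core 0\n");
        return 0;
    }

(The taskset utility does the same thing from a shell, no code required.)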
One thing that a microcontroller could be trained to do is hoodwink the OS into believing that it has control over the system's power state while the power controller does what it, or a savvy system owner, knows is best. Nehalem's power management controller is Intel's engineering secret weapon and a welcome advance in x86 technology. Let's just hope that Intel keeps it open so that it doesn't fall into the wrong hands.

Lost iPhone 3G makes the heart grow fonder

Well, this is embarrassing, but I might as well blurt it out: The iPhone 3G that Apple loaned to me was stolen. I spent many days praying I'd let it slide under the fridge instead of having my bag pocket-picked. I worked hard to get iPhone 3G, worked harder to understand it, and just when I was getting to the good part — the part where I moved my mobile persona from BlackBerry to iPhone 3G — it was over.
The irony is that while I saw iPhone 3G as a dandy business handset, I didn't see it replacing my BlackBerry. As I routinely do, I chose to challenge this untested assertion. Over the course of one long day and a longer road, I discovered that BlackBerry to iPhone 3G is a transition I'd enjoy making, one I might make by choice. That was the last time I saw iPhone 3G.
I can't let my loss blind me to the good that preceded it. I opened myself to my iPhone 3G epiphany during a seven-hour road trip (it should have been five, but that's another story) to AMD's headquarters in Austin, Texas. I spent that trip with a BlackBerry 8800 and an iPhone 3G resting on my passenger seat, playing "anything you can do, I can do better" with them the whole way. It was a delight. I was not a paragon of highway safety that night, but I learned more from that trip than I did from a solid week of lab testing.
During the trip, the handsets' attention, and mine, were divided primarily among email, browser (news.yahoo.com and phone bandwidth tests on dslreports.com), and real-time navigation. Running Google Maps in its satellite view on BlackBerry (on T-Mobile's EDGE network) and AT&T's iPhone 3G side by side made for a self-running test of the handsets' GPS and cellular sensitivity, and the differences between AT&T 3G and T-Mobile EDGE cell data networks in speed and coverage. I had the BlackBerry 8800 and iPhone 3G zoomed to exactly the same level so that I could see at a glance how well each was managing to pull constant updates from the network and paint changes to the display.
One goal here was to get to the bottom of 3G, and I did. Of the roughly 250 miles of Interstate 35 between Dallas/Fort Worth and Austin, only about 50 miles was covered by 3G. Whenever I hit the centre of a 3G coverage cone, I was blown away by bandwidth of 500Kbit/s to a peak of just over 1000Kbit/s. Speed dropped sharply with distance from city centres; I saw EDGE-class performance around 150Kbit/s just before iPhone's 3G indicator winked out and the radio re-acquired with EDGE.
3G, I've discovered, is not wi-fi lite. The aspect of 3G that doesn't come up in advertising is its killer latency, that being the delay between a client's request for data and the first bit of the server's response. On the 3G network, I measured packet delays server-to-phone of as long as 600 milliseconds, with 300 to 350 milliseconds being typical. By comparison, cable and DSL latency ranges from 30 to 70 milliseconds. Because web pages are made up of dozens of little files strung together, latency can overcome bandwidth such that a complex web page does not render markedly faster on 3G compared to EDGE. 3G is a blessing in email, a subject that I'll take up shortly.
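The arithmetic explains it. Here's a back-of-envelope model in C using my measured 3G figures; the page mix, 40 objects of 10KB each fetched sequentially, is an assumption for illustration:

    /* Back-of-envelope page-load model: many small files fetched one
       after another. The 3G latency and bandwidth figures are the
       measurements above; the page mix is assumed for illustration. */
    #include <stdio.h>

    int main(void)
    {
        const int    files       = 40;     /* objects on the page (assumed) */
        const double kbytes_each = 10.0;   /* size per object (assumed) */
        const double rtt_s       = 0.325;  /* measured 3G round trip */
        const double kbit_per_s  = 500.0;  /* measured mid-range 3G rate */

        double latency_s  = files * rtt_s;
        double transfer_s = files * kbytes_each * 8.0 / kbit_per_s;

        printf("time lost to latency:    %4.1f s\n", latency_s);   /* 13.0 */
        printf("time spent transferring: %4.1f s\n", transfer_s);  /*  6.4 */
        return 0;
    }

Thirteen seconds of waiting against six and a half of transfer: on a busy page, the round trips, not the pipe, set the pace.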
My decision to mix tasks on the devices revealed differences in their usability. On the BlackBerry, I switched periodically among email, Google Maps, BlackBerry's standard-issue browser, and TeleNav, the last of these being a native turn-by-turn navigation system upon which I've become hopelessly dependent. BlackBerry runs these apps simultaneously, and I have a button assigned to switch from app to app. On iPhone 3G, I mixed it up with Google Maps, Mail and Safari. Apple's iPhone SDK doesn't permit simultaneous running of applications, but programs usually save their state when they exit and recover it on launch, giving the appearance of task switching.
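The save-and-restore pattern is simple enough to sketch. Real iPhone apps do it through the SDK's launch and exit callbacks; the struct and file name below are mine, for illustration only:

    /* The save-on-exit, restore-on-launch pattern that stands in for
       multitasking. Real iPhone apps hook the SDK's launch and exit
       callbacks; the struct and file name here are illustrative. */
    #include <stdio.h>

    struct app_state { int screen; int scroll_offset; };

    static void save_state(const struct app_state *s)
    {
        FILE *f = fopen("state.bin", "wb");   /* called as the app exits */
        if (f) { fwrite(s, sizeof *s, 1, f); fclose(f); }
    }

    static int restore_state(struct app_state *s)
    {
        FILE *f = fopen("state.bin", "rb");   /* called at next launch */
        if (!f) return 0;                     /* first run: nothing saved */
        int ok = fread(s, sizeof *s, 1, f) == 1;
        fclose(f);
        return ok;
    }

    int main(void)
    {
        struct app_state s = { 1, 0 };        /* defaults for a first run */
        restore_state(&s);                    /* pick up where we left off */
        s.scroll_offset += 42;                /* ...the user works... */
        save_state(&s);                       /* persist for next launch */
        return 0;
    }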
BlackBerry's multitasking lets the voice guidance from TeleNav break through no matter which app is in the foreground. When you're reading mail or even taking a phone call, the BlackBerry TeleNav lady pops in with status and directions (the other party to your call doesn't hear them). That's not possible without background operation. Apple's official position is that iPhone apps may not run in the background, and turn-by-turn navigation is singled out as a mustn't-do for developers.
Both BlackBerry and iPhone 3G (and iPhone as well) truly push email. I tested against my lab's Exchange Server 2007, running in a virtual machine under Leopard Server on Xserve as well as through Apple's MobileMe and BlackBerry Internet Services (BIS). In all cases, once a message was sucked into what Apple calls "the cloud", which is the BlackBerry- or Apple-hosted delivery or notification network, the handset picked it up. BlackBerry push is instantaneous and it can squeeze the initial fragment of message through one-bar coverage too weak to support a voice call. This is the legacy of the two-way pager model.
iPhone 3G takes a few seconds to get a push message, but within a 3G coverage area the extra bandwidth makes it more likely that a message with attachments will be in your iPhone 3G inbox when you open the mail client. Even in EDGE coverage, iPhone 3G can pull in, unpack and display a message with rich (Office, PDF, iWork '08, HTML) document attachments far faster than BlackBerry because the document viewer is embedded in the framework, and iPhone 3G's UI is faster and friendlier than BlackBerry's for navigating in documents larger than the display.
No one mobile device does everything you'd like it to, but I can tell you what makes me miss iPhone 3G. I could forward all of my email, attachments included, and instead of making myself crazy tuning filters to block it, I'd just let memory-wasting spam slip through. That's what iPhone 3G's 16GB of flash is for. Like most everyone else I know, my inbox is my database, reaching back for months if not years, and I really felt secure knowing that if my servers caught fire, if my house was knocked over by a tornado, all of the irreplaceable information that's archived as email and attachments would be safe in my pocket.
I finally set up a new domain with a fresh Exchange Server 2007 setup (virtualised on an eight-core Xserve). This is taking a production validation beating as I write, and the point of setting that up is to skip from BlackBerry to iPhone 3G without an intermediate stop in the consumer-targeted MobileMe. I really want to see how device management is handled. I'm motivated to learn how well remote device locking and remote wipe work.
I am humbled and more than a little embarrassed by the loss of iPhone 3G, but now that it's taking shape, my BlackBerry-free project is too good to shelve.

iPhone hackers go too far, get shut down

I was all set to give this week's column over to a new register-direct implementation of a JavaScript interpreter that's many times faster than all currently available implementations. It's not exactly growing hair on a billiard ball, but a nitro-boosted JavaScript will put a shine on AJAX and keep my most beloved language on track to becoming the gold standard for dynamic languages.
Apple decided to nix that story in favour of yet another iPhone piece, this one to celebrate the short life of a project that opened the iPhone and the iPod Touch Unix to developers. The keepers of the project are responsible for its demise, because they made it impossible for Apple to distinguish between innocent developers looking to create an unencumbered open source community on Apple mobile hardware, and those who want to force Apple to break its exclusivity deal with AT&T.
Up until a couple of days ago, it was possible to develop software for iPhone 2.0 devices (the iPhone, iPhone 3G, and iPod Touch running 2.0 firmware) without the encumbrances of Apple's onerous developer contracts and code-signing requirements. A very tidy iPhone 2.0 app called Cydia set up an App Store equivalent for open source developers and those interested in sampling their wares. With Cydia, there was no credit card required, no tracking of who downloaded what, and no restrictions on the capabilities of applications.
Open source software for iPhone 2.0 is produced and traded within a relatively small community that, in the majority, exemplifies the commandments of ethical hacking: Don't create victims, don't take money out of anyone's pocket, and make sure that the community's influence stays within the community. In other words, no malware, no piracy, and no infiltration among the non-savvy. If you keep to these rules, a community of hackers will generally be tolerated. Apple has quietly allowed open source iPhone development since the original iPhone was introduced. The community was gaining ground and respect. Books have been published, and one iPhone open source community leader addressed an SRO crowd at no less than an Apple Store.
Wherever treasure is unearthed, pillagers gather. iPhone open source development was enabled by a pre-SDK project to "jailbreak" iPhone 1.x firmware so that user-created iPhone applications could be installed and run. This required changes to the firmware, but it could be done without redistribution (Apple makes it freely downloadable). After jailbreaking came research into unpublished APIs and into the extent to which POSIX APIs were supported.
Open source development got under way in earnest, but for some of the people who undertook it, the jailbreak project was a stepping stone towards the ultimate goal of unlocking iPhone for use on any carrier's network. This was primarily a reaction to Apple's US exclusive with AT&T. I'm not crazy about that either, but hackers need to understand that Apple is contractually obligated to keep iPhone owners locked to Ma Bell's network. That means that Apple has to attack well-publicised efforts to unlock its device until its deal with AT&T expires.
iPhone unlockers recently issued a foolhardy boast that put them on the front page. They claimed that they had successfully unlocked the first-generation iPhone, using nothing but software, in such a way that Apple could not relock the device to AT&T. A Mac utility called PwnageTool gave non-savvy users a foolproof means to jailbreak and carrier-unlock their first-gen iPhones running 2.0 firmware.
I ran PwnageTool on my iPod Touch because I needed a secure shell (SSH) client for use on my wireless LAN. There is no cellular radio in an iPod Touch, so unlocking doesn't enter the picture. The tool is easy. Cydia pointed me directly to the open source package I needed, which turned out to equip the iPod Touch with an SSH server as well. Yup. The iPhone open sourcers can run background processes on your iPhone. It's fun to SSH into an iPod and run a shell session, but I found reaching out from the iPod Touch to my servers far more useful.
Apple's 2.0.1 firmware update accomplishes what hackers had claimed Apple couldn't do: It relocks an iPhone to AT&T. The original boast was predicated on the fact that through all of its prior updates, Apple had never updated the baseband (cellular radio) firmware. Well, 2.0.1 breaks this tradition, and it breaks unlocking.
Apple's iPhone 2.0.1 firmware also breaks iPhone open source development. My iPod Touch, which never made any trouble for AT&T or Apple, and never cost any App Store vendor a dime in lost sales, won't run Unix apps any more. I'm back to hauling a notebook around when just my iPod Touch would do.
Maybe the iPhone open source community will hack the iPhone open again. In the meantime, it's still possible to operate an iPhone or iPod Touch with an open source jailbreak by avoiding the 2.0.1 firmware update, but as it does with iTunes, Apple is adept at turning voluntary updates into a practical necessity by making related products dependent on the latest update.
There is an amicable way out of this. The best thing for all concerned would be for Apple to enable iPhone 2.0 open source development and the running of unsigned applications (such as shell or Python scripts), but only for device owners who explicitly consent to it. I'm all for protecting users from unwittingly welcoming nonpedigreed software into their iPhones. I'll be big about it and set aside the fact that an Apple-issued pedigree doesn't make software run any better.
An open source iPhone community benefits Apple by turning the iPhone into a platform in the Mac sense of the term, and this isn't at odds with Apple's App Store venture. Yes, iPhone unlockers spoiled the party for everybody. But Apple can lock out the unlockers while letting the iPhone open source party go on.

Fine-tuning systems isn't a server admin's job

While using an AMD Barcelona (quad-core Opteron) server to create a portable benchmarking kit for InfoWorld's Test Center, I discovered something unexpected: I could incur variances in some benchmark tests ranging from 10% to 60% through combined manipulation of the server's BIOS settings, BIOS version, compiler flags and OS release.
I attempted to document the impact that each individual change had on performance, but flipping one switch often changed the effect of all the others. This frustrating yet fascinating effort ran aground when I had fiddled with settings and flags so much that my testbed stabilised; I could no longer budge the results no matter what I did. Normally stability is a good thing, but in this case, I was more interested in investigating the variance than in eliminating it. If I tried to bring this case to an actual engineer, he or she would likely tell me it had all been a mirage.
Recently, I had an opportunity to put this matter to a whole panel of engineers, the brain trust that manages performance and power testing for AMD's server CPUs. I was told that I was seeing an effect that's widely known among CPU engineers, but seldom communicated to IT. The performance envelope of a CPU and chipset is cast in silicon, but sculpted in software. Long before you lay hands on a server, BIOS and OS engineers have reshaped its finely tuned logic in code, sometimes with the real intent of making it faster or more efficient in some way that AMD hadn't considered, sometimes to compensate for overall server design flaws, and sometimes to homogenise the server to flatten its performance relative to Intel's.
Perhaps there have been cases where AMD servers were made more powerful or efficient through software tuning that deviates from AMD's advice. I sincerely doubt it. Most times, trying to out-think AMD's engineers is a fruitless exercise, but system and OS makers do it all the time. When they get it wrong — and this is far easier to do than getting it right — it costs you. You end up with systems that aren't performing to their potential, are letting power efficiency features go unexploited, or both.
AMD has performance engineering teams devoted to the science of optimisation. Before a single system is built using any new family or major revision of AMD64 microprocessor, AMD issues detailed documentation listing each CPU's capabilities and tunable parameters. New CPUs and AMD-built chipsets go out with reference BIOS code that puts the processor in an optimal state before booting the OS. I met the engineers who develop the guidance for system makers, BIOS vendors and the OS development teams at Microsoft, Red Hat, SUSE, and Sun. They're no amateurs; I'd trust their advice. But once they do their jobs, the tuning of each system sold with AMD CPUs is out of their hands. The tiller is turned over to software.
The BIOS gets in there first. Machine code in the BIOS walks through the CPU's parameters and initialises them based on some combination of AMD's advice, the system administrator's preferences as expressed through configuration settings, and the whims of the system maker. Manufacturers contract for the development of the BIOS firmware in their servers. They determine what an admin can adjust, as well as the settings of all the things you can neither see nor change.
You'll never find a server shipped from the vendor with overly aggressive settings. Systems may be tuned downward to operate at the widest possible range of temperatures, to accommodate cheaper components, or to throttle performance in order to compensate for inadequate cooling design. Tuning can also undercut system performance in misguided attempts to meet energy efficiency targets, when that objective could be just as well served without sacrificing performance. For example, slowing memory access can cool the system considerably and lower its power draw, but the cap on performance means that tasks take longer to complete, increasing the time that the system spends drawing maximum power.
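The arithmetic of that trap is worth spelling out. With illustrative figures, not measurements, a throttled box that sheds 20 watts but runs 30% longer burns more energy per job:

    /* Energy per job, before and after a downward memory tuning.
       The wattages and run times are illustrative, not measurements. */
    #include <stdio.h>

    int main(void)
    {
        double fast_watts = 260.0, fast_seconds = 100.0; /* untuned system */
        double slow_watts = 240.0, slow_seconds = 130.0; /* memory throttled */

        printf("untuned:   %.0f joules per job\n", fast_watts * fast_seconds);
        printf("throttled: %.0f joules per job\n", slow_watts * slow_seconds);
        return 0;   /* 26000 vs 31200: the "efficient" box uses 20% more */
    }

Lower draw, higher bill. Efficiency is energy per unit of work, not watts on a meter.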
The OS also presents an interesting issue, especially with Windows. I was surprised to learn that starting with Vista, processor drivers, critical to controlling power states, are being written exclusively by Microsoft. AMD knows a thing or two that Microsoft doesn't about tweaking an AMD64 CPU for speed and efficiency. Microsoft wants to handle this on its own, and tuning within Windows Vista and Server 2008 does not take the unique characteristics and advantages of AMD's architecture into account.
The big problem is that there's no way for IT and end-users to find out what they're missing. It is possible to dump the myriad registers affecting performance, but they're meaningless to mortals and many can't be changed without disrupting operation. Short of writing your own BIOS, there isn't much you can do. Maybe that will change. The secretive relationship between chipmakers and OEMs doesn't always serve customers well. The configuration advice that AMD issues to its OEMs, BIOS vendors, and OS vendors could form a sort of fingerprint. Even without an understanding of the meaning of individual registers and flags, patterns of variance can point to a vendor's agenda for diverging from best practices. If nothing else, IT could ask why AMD's advice wasn't followed. There may be perfectly good reasons, reasons that differentiate one server brand from another and show who's been doing their homework.
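For the curious, here's what dumping one of those registers looks like on Linux, through the kernel's msr driver (modprobe msr, and run it as root). Knowing which addresses to read, and what the bits mean, is exactly the vendor-guidance gap I'm describing:

    /* Read one model-specific register through Linux's msr driver
       (modprobe msr; run as root). The register address is supplied on
       the command line; which addresses matter is the vendors' secret. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <msr-address-in-hex>\n", argv[0]);
            return 1;
        }
        uint32_t reg = (uint32_t)strtoul(argv[1], NULL, 16);
        int fd = open("/dev/cpu/0/msr", O_RDONLY);
        if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

        uint64_t value;
        if (pread(fd, &value, sizeof(value), reg) != sizeof(value)) {
            perror("pread");               /* some registers refuse reads */
            close(fd);
            return 1;
        }
        printf("MSR 0x%x = 0x%016llx\n", reg, (unsigned long long)value);
        close(fd);
        return 0;
    }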
No chipmaker would ever single out an OEM for praise or scorn. AMD's no exception. While AMD's testing engineers express frustration that their recommendations are a take-it-or-leave-it affair, and that when their advice is set aside it affects the public's perception of their CPUs, they don't take it out on OEMs either on or off the record. AMD figures that this is the way the system works when you're on the 20 side of a market that's split 80/20.
The system needs to change. AMD is building new classes of high-powered client platforms that are wide open to end-user parametric tweaking. Enthusiasts and gamers do pay attention to AMD's advice with regard to performance, and they're driven to pull the maximum possible performance out of AMD's silicon. This serves AMD well, because when third parties do this and write about it, AMD doesn't have to out OEMs for taking a lazy approach to configuration. Enthusiast-tweaked machines create a best case, and makers of desktops sold into the high end will have to explain why they don't live up to best case numbers. That's not being done for servers, in large part because server enthusiasts willing to do exploratory tweaking of their machines are rare. I only know one such person.
As mainstream server CPUs grow from four to six to eight cores, four-socket servers become the norm and deeply multi-threaded applications come to predominate, tuning the CPU, chipset, bus and memory becomes crucial, with a direct impact measured in dollars, hours, and watts. This is tuning that administrators shouldn't be required to do by hand. They should be able to trust that when a system hits their floor, it performs as well as its technology permits. This requires that vendors put some effort behind understanding and leveraging the differences between AMD and Intel architectures — effort that isn't a priority at present. This mystifies me, since AMD does all of the legwork, freely handing vendors BIOS and kernel guidance that started taking shape when the CPU was still in simulation. It takes a lot of work to ignore the chipmaker's advice, and so far, I've seen no evidence that it does customers any good.

iPhone contracts leave developers speechless

Apple apparently chose the best possible template for its iPhone developer programmes: its own Apple Developer Connection for OS X. Why it then chose to wrap the freely downloadable iPhone SDK in confidentiality is a puzzling contradiction in the company's seemingly open approach to development.
The basic ADC membership is open to everyone and free of charge. All you need is a verifiable (at least temporarily) email address to obtain a free Apple ID. Free and paying ADC members get exactly the same commercial-grade development tools, samples and docs. Depending on their membership level, paying members get additional access to pre-release software, prepaid tech support incidents, hardware discounts, and WWDC tickets. Free members face no disadvantages when it comes to creating and distributing applications for the current or prior release of OS X. That cornerstone of the Mac platform accounts for its large catalogue of high-quality free and inexpensive applications, as well as its loyal and welcoming community of developers. ADC is the magnet that draws developers to the Mac from other platforms.
By choosing ADC's tiered programmes as a model, it seemed that Apple had tilled the field for an instant and vibrant iPhone developer community. Then it salted the ground by making the iPhone SDK confidential even for those who download it for free. The upshot is that every citizen of planet earth can get the iPhone SDK at no charge, but they're contractually obligated to Apple not to discuss the SDK or exchange ideas with others. The agreements leave no room for forums, newsgroups, open source projects, tutorials, magazine articles, users' groups or books.
The terms and conditions of the most restrictive agreements to which all iPhone developers are bound are secret. A few sentences from the nonconfidential iPhone Registered Developer Agreement (PDF) are sufficient to illustrate the breadth and severity of the restrictions. As is always the case, you must not rely on my excerpts or analysis as a summary of the agreement or as legal advice.
From Section 3, Confidentiality: "You agree not to disclose, publish or disseminate any Confidential Information to anyone other than to other Registered iPhone Developers who are employees and contractors working for the same entity as you and then only to the extent that Apple does not otherwise prohibit such disclosure in this Agreement."
What's "Confidential Information"? The Agreement contains two definitions, one in Section 3 that's broad and rambling but with specific and liberal exceptions, and one in Section 4 that's concise and inescapable:
From Section 4, iPhone Materials: "All iPhone Materials shall be considered the Confidential Information of Apple"
Section 4 also makes unrestricted allowance for additional tightening of screws (not quoted in full): "All use of the iPhone Materials shall be subject to this Agreement, unless such iPhone Materials are accompanied by a separate license agreement, notice or disclaimer (collectively, "Other Agreement") in which case such Other Agreement will govern to the extent of any inconsistencies with this Agreement [...]"
There are two Other Agreements (the secret ones): one governs free use of the SDK; the other, the responsibilities of iPhone Developer Program members. I have no problems with the latter. When money gets involved, that changes the rules, and Developer Program members have access to trade secrets. My problem is that Apple brands publicly available information — that is, the released and freely downloadable iPhone SDK — as confidential. Laypeople who are ill equipped to interpret the secret Agreement attached to the free iPhone SDK are likely to assent to it without reading it, if they're aware that it applies to them at all.
I can't discuss the secret SDK Agreement, but you can read it for yourself by signing up as an iPhone Registered Developer.
This isn't Apple-bashing. This is serious business. You'll see arguments from armchair legal analysts that the iPhone developer Agreements won't stand up in court — but those analysts certainly won't stand up in court on your behalf. When you download the SDK, you grant Apple special rights to injunctions and suits against you for unspecified damages in addition to their rights under the law.
The iPhone developer Agreements covering the freely accessible iPhone SDK are not EULAs that you can blindly click to sign without study. It turns out that the iPhone developer programmes are the antithesis of the developer-friendly Apple Developer Connection. The iPhone Agreements are risk-laden contracts that make the iPhone SDK one of the most dangerous downloads on the internet. It is certainly the most heavily encumbered free software I've encountered.
If you're planning a forum, newsgroup, users' group, open source project, book or any discussion of iPhone development, the only path to protection from liability is explicit written approval from someone at Apple. Have a lawyer draft your request for exemption, and make sure that the Apple staffer granting it personally commits to status as authorised to approve exceptions to the iPhone Registered Developer and iPhone SDK Agreements.
The concerns I have expressed relate only to free access to the SDK. Terms of the paid iPhone Developer Program are appropriately confidential, and in my view, Apple offers paying individual developers a generous balance between benefits and responsibilities. This said, shutting the door to all opportunities for discussion of the freely available iPhone SDK hurts all iPhone developers.

Apple's MobileMe offers mobile life without Exchange

The competitive marketing brickbat that Apple flung at BlackBerry — that BlackBerry's push email works only with Microsoft Exchange, as if Exchange were an onerous burden — has quietly vanished from Apple's campaign.
Exchange Server turns out to be the only customer-hosted messaging back end supported by iPhone 3G and first-gen iPhones running 2.0 software. It's true that BlackBerry requires BlackBerry Enterprise Server (BES), but BES integrates with Domino and GroupWise as well as Exchange, and BES works transparently with non-BlackBerry devices through BlackBerry Connect. I'll always be here to set the record straight.
If you balk at the extra $3,000 to $10,000 it takes to strap BES onto Exchange, then your needs are more basic. You may be best served by a third-party hosting provider, but even that can be overkill for individual professionals and small businesses. RIM's solution for individuals is BlackBerry Internet Services (BIS), its own hosted push messaging. BIS is bundled free with T-Mobile's BlackBerry coverage plans (I can't speak for other carriers), and it replaces an earlier consumer-targeted service that included a web-based mail reader and server-side message filters.
I liked that service, but it carried a stringent limit on mailbox size; BIS does away with that limit, along with the web interface. On T-Mobile's network, messages aren't stored where you can get at them using anything but your BlackBerry, but BIS can keep an unlimited number of messages in flight until they're either fully delivered or they bounce to the sender after several days of failed delivery. BIS can maintain multiple mailboxes for each subscriber, with separate folders on the device's home menu and dedicated client-side filters (for example, vibrate for VIP messages even when the phone is in quiet mode). You can gateway POP3 mail through BIS, and although POP isn't inherently push-capable, once BIS picks up a message, it follows the same assured delivery path as any other BlackBerry missive.
Anything that's free comes with a catch, and in the case of BIS, the catch is that it handles email only. You can send and receive appointments and individual contacts packaged as standard email attachments, but they don't hit your calendar or address book until you open the attachment. Also, unless you're running BES, your calendar and address book live only on your device until you manually back them up on your desktop. BIS affords users no gateway to the sort of live collaboration, shared folders, and instant messaging offered by Exchange and BES. That's why "enterprise" is BES's middle name.
Apple didn't frame .Mac, its subscription-based online service for Mac clients, as a solution for professionals. However, Steve Jobs touts .Mac's evolved form, MobileMe, as "Exchange for the rest of us" — quite a boast indeed. MobileMe, which costs US$99 per year or US$149 for a five-user pack, is the only way non-Exchange users can get push email to their iPhones. MobileMe has that in common with BIS, but the similarities end there.
I am a longtime .Mac subscriber, so I'm familiar with MobileMe's features: 10GB of sharable online storage, slick AJAX mail, address book and calendar clients with sweet touches like recipient completion, the requisite personal website/blog, and photo gallery.
Fairly recently, .Mac took on a couple of new roles custom-tailored for professional users. It provides manual or scheduled synchronisation of contacts, bookmarks, appointments, mail rules and mailboxes (not messages) across multiple Mac clients. Everything synced with your Mac is reflected immediately in MobileMe's web interface. Back to My Mac, also relatively new, is a secure screen-sharing gateway that burrows through residential internet providers' NAT and router firewalls. For those of us who have more than one Mac, .Mac is our sanity's saviour. MobileMe is at least that.
A simple characterisation of MobileMe is that it's .Mac with iPhone support. That would be enough to recommend it, but there's much more to it. MobileMe adds push capabilities for email, calendar and address book so that the clients and devices you enrol for synchronisation with your MobileMe account are updated within seconds of any change made by a client or via MobileMe's web interface. The update delay, as demonstrated by Apple, is minor — short enough to allay my suspicions that iPhone is just polling MobileMe at close intervals.
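For readers who want that distinction made concrete, below is a minimal sketch of the two patterns, close-interval polling versus a push-style long poll, against a hypothetical sync endpoint. The URL and API are invented for illustration; this is not Apple's actual protocol.

    # Illustrative contrast between close-interval polling and push-style
    # long polling. The endpoint is hypothetical, not MobileMe's protocol.

    import time
    import urllib.request

    SYNC_URL = "https://sync.example.com/changes"  # hypothetical endpoint

    def handle(payload: bytes) -> None:
        print("change received:", payload[:60])

    def poll_for_changes(interval_seconds: int = 15) -> None:
        """Polling: ask over and over, paying latency and battery whether
        or not anything changed; worst-case delay equals the interval."""
        while True:
            with urllib.request.urlopen(SYNC_URL) as resp:
                handle(resp.read())
            time.sleep(interval_seconds)

    def wait_for_push() -> None:
        """Long poll: hold one request open and let the server answer only
        when something changes, so updates land within seconds."""
        while True:
            req = urllib.request.Request(SYNC_URL + "?wait=300")
            with urllib.request.urlopen(req, timeout=330) as resp:
                handle(resp.read())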
Based on Apple's claims, which I can't prove out until my iPhone 3G arrives tomorrow, MobileMe has the power to elevate iPhone to lead consideration among smartphones for mobile professionals. If MobileMe doesn't already sound too good to be true, consider the grace note: Windows Outlook clients can now be joined to MobileMe's pool of push-synced clients. If calling MobileMe "Exchange for the rest of us" doesn't target MobileMe at individual professionals, then support for Outlook, which is hardly the mail client of choice for home users, makes a clearer case; US$99 per year for push email plus over-the-air, cross-platform desktop/device sync is an absolute no-brainer.
I've contemplated, but not tested, the notion that Apple might use ActiveSync, or a protocol enough like it to fool Outlook, to push MobileMe messages and updates. Why not? We know that Apple licensed ActiveSync for its iPhone 2.0 software. I'm getting ahead of myself by imagining MobileMe as a premier individual messaging and sync service for Windows and Windows Mobile smartphones, but that'd be a kick in the head.
BIS may end up looking pretty anaemic compared to MobileMe from a features perspective, but it executes its limited feature set flawlessly. There's no need to qualify my recommendation of BIS to a professional user who needs only push email with guaranteed intact delivery in both directions. I have used BIS in that role, on and off, for years. The problems I've had with it have been of my own making. MobileMe has to prove itself to be a bulletproof individual push messaging solution above all else, and I'll be taking regular and ruthless shots at it as I give iPhone 3G a chance to serve where BlackBerry has gone before it.
Enterprises don't need a stand-in for Exchange Server. Neither BIS nor MobileMe permits the building of workgroups, and neither enforces policies or otherwise enables central management of devices. Organisations with these needs that have even a few handsets in their fleet will find Exchange, GroupWise or Domino a necessity. iPhone will have to earn its reputation as an enterprise device by mating with Exchange Server as seamlessly as my BlackBerry and Windows Mobile devices do. But iPhone also has to satisfy the needs of one, two, or five users. MobileMe puts Apple on an ambitious path toward that goal.

What does Nokia's Symbian move mean?

How many mobile device manufacturers does it take to keep the most successful handset operating system alive? If you guessed "one", you're right. If you guessed "five", you're right. If you're confused, you're in good company.
Nokia recently acquired Symbian, the developer of the Symbian mobile OS licensed by Nokia, LG, Motorola, Samsung and Sony Ericsson and used as a system software component in their mobile platforms. Perhaps it's just me, but I find this story remarkable on many levels. Knowing no more than the single fact I've presented, you might conclude that Nokia, which already held a controlling interest in Symbian, is moving to pull the rug out from under its competitors. It turns out that Nokia's not the latest antitrust bad boy. Put a cape on Nokia, because it is a champion of corporate trust, or whatever anti-antitrust works out to be. Symbian's engineers will wear Nokia badges, but every line of code they crank out will be turned over to Nokia's competitors, and later, to the world.
Nokia will bring Symbian's development operations in-house, but Nokia won't own Symbian's intellectual property. With the purchase, Nokia simultaneously established the non-profit Symbian Foundation, whose members include five Symbian licensees as well as a trio of major wireless carriers (NTT DoCoMo, Vodafone, and AT&T) and a pair of embedded semiconductor manufacturers (TI and STMicro). The Foundation effectively owns Symbian OS.
If the Symbian Foundation membership seems an odd assortment, consider this: if Foundation members could agree on a set of objectives, the Foundation might be able to drive a new device from concept to wireless network deployment in a fraction of the time it takes today. Just eliminating the duplicated effort that each handset maker wastes on homebrew workarounds for issues that are fixed in later revisions of the Symbian OS would be a huge shared win. Having all of the vendors tap the same code spigot at the same time would make patches easier to distribute and, dare I imagine it, make it possible to write applications to Symbian OS that run on multiple brands of phones. That's pretty much the message of the PR accompanying Nokia's announcement.
Nokia bought Symbian with the stated long-term intention of giving the OS away as proper open source, a detail that has drawn the focus of observers who see the Symbian Foundation as a bulwark against Google's Open Handset Alliance. My own knee-jerk response was to view the Foundation as a circling of wagons against iPhone. The timing of the announcement, about two weeks before iPhone 3G's availability, strains coincidence. Whatever the perceived external challenge, the Symbian licensees felt compelled to come together to do something that they couldn't achieve separately. I wonder if Symbian OS was about to hit a wall that could only be avoided if the strongest engineering force among the Symbian Five took over development.
Handset makers in the Symbian Foundation have to prep themselves for the next wave of high-end handsets. The brick on your belt can't merely make phone calls and swap text messages. It will have to be a camcorder, a still camera, a television, an iPod, an Office desktop, a Wi-Fi/Bluetooth/cellular wireless data gateway, a navigator and presence beacon, a 3-D game console, and an internet tablet. As outrageous as it seems, especially considering that most high-end devices are targeted at commercial buyers, device manufacturers will have to treat this "prosumer" feature set as baseline and innovate within and above it.
The engineering catch is that these advancing features must be accommodated while simultaneously lengthening battery life from model to model. That job falls largely to the OS, and I think the Symbian Foundation came together because the licensees decided that none of them could get to next-wave devices with Symbian as it is.
That wouldn't be surprising. Google and Apple decided that paving the way for uberphones (I'm running out of neologisms) justified a trip back to the drawing board for system software. Each Symbian licensee has checked out Symbian alternatives; Nokia and Motorola, at least, have non-Symbian devices in their lineups. But each has decided that for its next push, it has to use what it's got, and that's Symbian.
What TI and ST bring to the Foundation is new generations of application processors. Texas Instruments, a hometown favourite of mine, just introduced a nifty US$50 chip sandwich that combines two RISC CPUs with a 3-D graphics accelerator, memory, and a plethora of peripheral controllers. $50 for a single component telegraphs the cost of a device that might integrate and use it to full advantage — at least $500 before wireless carrier subsidies. Since the ARM CPU architecture dominates mobile devices, TI's new component could be used by any modern mobile software platform. But when reviewers start expressing mobile device metrics in terms that include frames per second and megabits per second, an OS's dexterity at juggling hundreds of threads while remaining responsive to the user becomes paramount.
The devices that will blow the doors off iPhone 3G already exist in simulators. Market ranking and device capabilities won't be decided by hardware; it will all come down to software. As unglamorous as Symbian or any other operating system may be, no mobile device vendor can lead the market without leading in OS engineering. Whether the Symbian Foundation can put Symbian OS on that space-age trajectory is an open question. If it can't, there will be five handset manufacturers standing on the sidelines when the next major mobile leap is taken.

Digital TV hints at erosion of internet rights

With regard to the free exchange of information over the internet, we, the people, have mostly managed to hold our ground. We can thank activists, hacktivists, legislators saying "no, thanks" to money from the entertainment lobbies, and forward-thinking artists and content distributors — I'm proud that writers and publishers took the lead on this — who recognise that reach is the currency of the digital age.
We should take internet providers' arbitrary blocking and throttling of BitTorrent traffic as a warning sign of descent down the slippery slope towards the loss of internet freedoms. The stated rationale is the bandwidth wasted by BitTorrent. That doesn't ring true. Other flavours of traffic, such as VoIP, streaming news, advertising and entertainment, photo galleries, remote PC access, Usenet repositories, denial-of-service attacks, and spam, all consume beastly amounts of bandwidth. But somehow, none of these warrants detection and control at the provider's end of the pipe. It makes one wonder: what's so special about BitTorrent that it cries out to be controlled in such a radical manner?
That's an easy one. The entertainment lobby (my shorthand to avoid spewing the alphabet soup of movie, TV, and music trade groups), having failed to get the US government to impose a tax on videotapes and recordable discs, or to hold Internet providers liable for copyrighted content transferred through their networks, or (so far) to add a piracy tax to every broadband user's monthly bill, is using the most powerful weapon yet devised: "Standards."
I put that in quotes to differentiate it from true standards. Analogue television, for example, works because standards and regulations ensure the interoperation of transmitters and receivers. These standards take the public good into account. The move towards digital television, which will be complete in February 2009, is attended by standards and regulations constructed to ensure interoperability and to guard the public good as well. No broadcaster can arrange for a digital TV signal to require a non-standard receiver, for example, one that bills your credit card every time you watch a popular show on an over-the-air (OTA) digital channel.
The very characteristic that makes digital TV look so good is the one that makes it so vulnerable to restriction and manipulation: a TV broadcast is no longer a signal but a bitstream, one that has far fewer points of origination than the internet and is therefore easier to control. Digital TV is rapidly heading for precisely the sort of lockdown that entertainment and broadcast lobbies desire for the internet, and, to the extent that they can be used as video players and recorders, our PCs, Macs, and notebooks.
The primary example of digital lockdown is HDMI, the High-Definition Multimedia Interface. Simply put, HDMI is how you get digital video into a high-definition TV. HDMI looks like a dream come true: a single cable with a small connector passes digital video, digital audio and control signals. HDMI has always incorporated High-bandwidth Digital Content Protection (HDCP), but for a long time its enforcement was relaxed. You could hook an LCD computer monitor to a cable box or DVD player with an HDMI output. All you needed was a $20 HDMI/DVI adapter.
It doesn't work that way now. If you plug an LCD monitor into a late-model DVD player or other device with an HDMI output, all you'll see is text telling you that your device is incompatible. If it were truly incompatible, it wouldn't be able to display that text. Wait, it gets better.
Let's say you do spring for an HDTV with HDMI input. Depending on the maker of your cable box or DVD player, if you plug an HDMI cable into your TV, the device turns off all of its analogue outputs. Simply put, the price for upgrading your TV to digital is that your existing VCR, DVD recorder, and video-capable PC or Mac go blind. I can make recordings of digital and analogue cable programs, but only if I go behind my equipment rack and yank the HDMI cable out of my set top box. It gets better still.
HDCP requires credential handshaking that's prone to errors, forcing many consumers into the ludicrous practice of rebooting their TVs (mine runs Linux) to recover permission to watch them. I've had to update the firmware on my TV and amplifier to address HDCP issues, and it's still buggy as hell.
The lesson I learned from this is not to waste my money on HDMI cables. By trying to sneak martial law into a digital video interconnect standard, the entertainment lobby forced consumers to retreat to readily recordable analogue even for their high-definition content. Fortunately, the quality of component video, the three-cable analogue connection supported by all HDTVs and high-definition devices, is indistinguishable from HDMI in well-made equipment.
Can't a computer with a digital TV tuner and a DVD drive solve this whole mess and allow all-digital connections? It ought to. A copy of Vista Ultimate and a $129 TV tuner are ostensibly all that's required to turn a PC into a combination digital cable box and video recorder. But not so fast, friend. Have you met the broadcast flag?
The broadcast flag signals receiving equipment that recording is not allowed, not even to videotape. A broadcaster can stream this flag into any program it chooses. Nothing can be sold in the US that doesn't respect the broadcast flag and pass it downstream. Yes, I am aware that the Federal Communications Commission's mandate for the broadcast flag was overturned by a Court of Appeals. This simply means it doesn't have federal enforcement. The entertainment lobby still has the power to impose its will on technology companies. Some companies have proved more eager to eat from entertainment's hand than others.
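As a thought experiment, here is a minimal sketch of what flag-compliant recorder software amounts to. The field name and programme data are hypothetical, invented for illustration, not taken from any actual DVR implementation.

    # Hypothetical sketch of recorder software honouring a broadcast-flag-
    # style bit carried in a programme's metadata. Field names are invented.

    def may_record(programme: dict) -> bool:
        """Refuse to record when the stream carries the do-not-record flag.
        Compliant equipment must also pass the flag downstream unchanged."""
        return not programme.get("broadcast_flag", False)

    episode = {"title": "Prime-time drama", "broadcast_flag": True}

    if may_record(episode):
        print("recording:", episode["title"])
    else:
        print("recording blocked by broadcast flag:", episode["title"])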
Microsoft baked the broadcast flag into Vista, a fact that was revealed last month when NBC inadvertently flipped on the flag for an episode of American Gladiators. Vista-based Windows Media Centre systems tuned to NBC refused to record the show. I'd take a cheap shot about this programme's popularity among Vista users, but Vista users weren't alone. Everyone with a new digital video recorder got hit by the blackout. NBC said "oops" and Microsoft said "so what?". Let's start a pool on how long it will take before we're paying for reruns of the network TV shows we pick up with rabbit ears or pull from basic cable.
It disappoints me deeply that not one vendor told entertainment to get stuffed. The closest thing I've gotten to a statement from a vendor that's been in the back room with entertainment came from ATI, fresh from its buyout by AMD and jazzed about a recent win. It reads: "We're one of the first to ship Blu-ray player software with our hardware." Later in the discussion, I was told that "ATI has reduced the risk of unauthorised access to the frame buffer." Given that frame buffer access enables recording video to disk, I didn't have to ask who was considered unauthorised.
It would seem that the internet, being so anarchic, won't have its arm twisted so readily by the entertainment lobby, but internet rights restrictions would come through your telecommunications equipment, and it would take an act of Congress to force a change to the firmware of networking devices to restrict traffic based on content. There will be no broadcast flag for files that don't start life as commercial content. The vendors who make the components and operating systems that run our laptops and desktops see broadband digital entertainment as the next frontier, the next great driver of sales and services. The entertainment industry has declared that there is no path to those riches but through it, and that path requires paving over a few of your freedoms.
Unless, that is, you download your entertainment through BitTorrent. Does it meet the definition of "irony" that it's far easier for an unskilled person to do this than to deal with HDMI, HDCP, broadcast flags, frame buffer blocks, and other nonsense created specifically to frustrate consumers' efforts to enjoy digital entertainment?

Here's hoping for some iPhone competitors

One of my live blog entries from the keynote at Apple's Worldwide Developer Conference had a one-line body: "iPhone competitors, it's over."
It doesn't excite me to say that. I don't like seeing any one player rise to lord of the realm. At least theoretically, a cornered market isn't good for buyers. But make no mistake, Apple entered the mobile market to corner it, and at this moment it's largely unopposed. If you wonder how Apple got there so fast, consider the zeitgeist that Apple tapped: most wireless customers are sick of their wireless carriers. After years of overbilling, lousy support, spotty coverage, being locked out of bargain rate plans available only to new customers, and worst of all, being stuck with carriers' anaemic hand-picked catalogues of devices, customers wish wireless carriers would just go away.
That's what Apple did. iPhone makes wireless carriers go away. At first, carriers went through the motions of negotiating with Apple to retain ownership of subscribers. But then a funny thing happened. The most possessive of all carriers, AT&T, discovered that it doesn't really like taking care of customers (surprise!). iPhone gives AT&T the benefits of customer ownership — rate plan lock-in and term contracts — with none of the hassle of support, advertising or plan competition. Word got out that AT&T likes it, and Apple quickly knocked over carriers worldwide with its simple "lean back and get paid" plan.
At least in North America, wireless carriers are completely helpless when it comes to services. They all spend fortunes trying to create mobile internet and media services so appealing that they'll lure people away from other carriers. Apple tells carriers that for just one model of phone, they can skip trying to cobble together a bundle of carrier-unique services.
Apple gives AT&T permission to send you a flat-rate bill, with the only allowed unit charges being for text messages. The rate plan is expensive, but there is no fine print. You're also paying for Apple's helpdesk, which is shockingly helpful.
If a handset maker came to me looking for advice on competing with iPhone, I'd offer a place to start. Break wireless carriers' blockade on handsets. Release new handsets worldwide, simultaneously. Opt out of the charade that carriers have to validate individual devices on their networks before allowing users to buy them. With high-end handsets, customers will welcome the freedom to choose the handset first and the carrier second. Carrier and device choice will be powerful competitive weapons.
Don't be so price-sensitive about your handsets. People will pay $399 or $499 for a feature-rich handset for the privilege of owning it free and clear.
Get your developer program act together. If you can't create the tools and the community you need to build the kind of vital third-party mobile app market that Apple is constructing for iPhone, find and fund some champions in the open source realm. Get devices out to developers; even refurbs would be welcome. Most mobile developers don't have enough devices against which to validate their code.
Expose the advantages of your platform at a high level. For example, iPhone can't run applications simultaneously or open arbitrary TCP or UDP sockets over a wireless connection. Consider the applications that these limitations rule out, and show that your platform enables them.
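To make concrete what an "arbitrary socket" buys a developer, here is a minimal sketch of the kind of long-lived TCP connection a background IM or presence client would hold open. The host, port and one-line handshake are hypothetical, purely for illustration.

    # Minimal sketch of the long-lived TCP connection a background IM or
    # presence client would keep open. Host, port and handshake are invented.

    import socket

    def open_persistent_connection(host: str = "push.example.com", port: int = 5222):
        """Open a raw TCP connection of the sort an always-on client holds."""
        sock = socket.create_connection((host, port), timeout=10)
        sock.sendall(b"HELLO\n")    # application-defined handshake
        reply = sock.recv(1024)     # block until the server answers
        return sock, reply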
The lowest-hanging fruit for iPhone competitors is savvy, self-sufficient users of high-end handsets and the developers who want to code for those users. That's not the whole roadmap, but it's a place to start. Believe me, the market is waiting.
