
Virtualisation - News, Features, and Slideshows

News

  • Revera adds Microsoft Hyper-V to offerings

    New Zealand datacentre and hosting company Revera has added Microsoft's Hyper-V virtualisation as an alternative offering to VMware.

  • BNZ ready to retire 500 servers

    Virtualisation is the name of the game for the Bank of New Zealand, which expects to phase out around 500 servers after purchasing two high-end IBM System z mainframes.

  • Microsoft attracts 1,000

    Around 1,000 people turned up last week for Microsoft's Launch Wave event in Auckland, which promoted new versions of Windows Server, SQL Server and Visual Studio.

  • Virtualisation's secret security threats

    Almost any IT department worth its salt is deploying virtualisation technology to reduce power usage, make server and OS deployments more flexible, and better use storage and system resources. But as virtualisation gains in popularity it may bring new risks, says Don Simard, commercial solutions director at the US National Security Agency. At the same time, it may also bring new protections, he says.

  • 3Com adds virtualisation to routers

    3Com has added a virtualisation feature to its routers via a partnership with LineSider Technologies, a provider of policy-based network infrastructure control and management products.

  • Virtualised security: the next frontier

    Companies are adopting virtualisation technologies at an ever-faster rate, virtualising servers, desktops, storage and networks. But one aspect of infrastructure has lagged behind: very few companies address the growing demand for virtualised security.

  • Virtualisation shakes up backup strategy

    Virtualisation is causing customers to rethink their backup strategies, using technology that combines traditional backup with techniques unique to the virtualised world.

  • Self-aware virtualisation would be a blessing

    Organisations that leverage virtualisation to provision and reallocate pooled resources still face the challenge of gaining intimate knowledge of application behaviour and runtime requirements.
    It's a pity, isn't it, that operating systems and virtualised infrastructure solutions can't just know what applications need and allocate resources to fit, instead of requiring a lot of observation and scripting.
    Virtual infrastructure will undoubtedly take on ever-smarter heuristics for automating the distribution of computing, storage and communications resources. But setting this up as the only alternative to scripted agility presumes that software within a virtual container will always play a passive role in the structuring and optimisation of its operating environment.
    I submit that to extract ever more value from virtualisation, software must take an active role. However, I still hold to my original belief that software, including operating systems, should never be aware that it is running in a virtual setting. I want to see software, even system software (an OS is now an application), get out of the business of querying its environment to set a start-up state and, worse, a continuous operating state. Such querying severely limits the ability of tasks to leap around the network at will, because an OS freaks out if it finds that its universe has changed in one clock tick. In the least disruptive case, if its ceilings were raised, the OS instance (and, therefore, the mix of applications running under it) would take no advantage of the greater headroom afforded by, say, a hop from a machine with 2GB of RAM to one with 32GB. So how can software be a partner in the shaping of its virtual environment without trying to wire in awareness of it?
    Clearly, software must be able to query subordinate software to ascertain its needs. The technology exists now to do this at start-up. When commercial software, or software written to commercial standards, is compiled, optimisation now includes steps that give the compiler a wealth of information about the application's runtime behaviour.
    One such step is auto-parallelisation. This stage of optimisation identifies linear execution paths that can be safely split apart and run as parallel threads (a minimal sketch of such a loop appears at the end of this item). That's some serious science, but the larger the application is, the more opportunities there are for auto-parallelisation, and on multicore systems the win can be enormous. The analysis that a compiler must perform to identify latent independent tasks could go a long way towards helping a VM manager decide how an application can be scattered across a pool of computing resources. If the ideal virtual infrastructure is a grid, then the ideal unit of mobile workload is the thread. If the compiler finds that an application is monolithic, this information, too, could be valuable, signalling that a process can be moved only as a whole.
    I'm more excited about technologies that apply runtime analysis to the goal of optimisation. One, commonly known as profile-guided optimisation, is a two-step technique: the application is compiled with instrumentation for runtime profiling, and a training run produces a detailed log of the application's behaviour. This log, plus the source and object code, is pushed through the compiler a second time, and the resulting analysis creates potential for optimisation bounded only by the intelligence in the compiler (a build sketch appears at the end of this item).
    If this intelligence were available at runtime, then a virtualisation engine wouldn't need to wonder so much about whether a process, thread, block of memory, open file handle or network socket could be safely relocated. The kind of surprises that complicate planning and automated reallocation of resources would be significantly reduced.
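    As a concrete illustration of the kind of loop auto-parallelisation finds (a minimal sketch, not taken from the column; the GCC flag named in the comment is just one example of an auto-parallelising compiler option):

        /* Each iteration writes a distinct element of out[] and reads only
         * in[], so there are no loop-carried dependencies. An
         * auto-parallelising compiler (for example, building with
         * gcc -O2 -ftree-parallelize-loops=4) can prove this and split the
         * loop across threads without any change to the source. */
        #include <stdio.h>
        #include <stdlib.h>

        static void scale(const double *in, double *out, size_t n, double factor)
        {
            for (size_t i = 0; i < n; i++)
                out[i] = in[i] * factor;   /* independent per-iteration work */
        }

        int main(void)
        {
            size_t n = 1u << 20;
            double *in = malloc(n * sizeof *in);
            double *out = malloc(n * sizeof *out);
            if (!in || !out)
                return 1;
            for (size_t i = 0; i < n; i++)
                in[i] = (double)i;
            scale(in, out, n, 2.5);
            printf("out[42] = %f\n", out[42]);
            free(in);
            free(out);
            return 0;
        }

    A compiler that cannot prove the iterations independent leaves the loop serial, which is exactly the "monolithic" signal the column says would also be useful to a VM manager.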
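    The two-step technique above is what compilers commonly call profile-guided optimisation. A minimal build sketch follows, assuming GCC (other compilers use different flags; the program, file name and workload are hypothetical): compile once with instrumentation, run it to record a profile, then recompile using that profile.

        /* Two-step profile-guided optimisation, using GCC as one example:
         *
         *   step 1: gcc -O2 -fprofile-generate pgo_demo.c -o pgo_demo
         *           ./pgo_demo          # training run writes *.gcda profile data
         *   step 2: gcc -O2 -fprofile-use pgo_demo.c -o pgo_demo
         *
         * The second compile uses the recorded branch and call frequencies to
         * optimise hot paths; that recorded behaviour is the kind of runtime
         * knowledge the column suggests a virtualisation engine could draw on. */
        #include <stdio.h>

        /* hypothetical workload: the profile records which branch dominates */
        static long classify(long x)
        {
            if (x % 97 == 0)          /* rare path */
                return -x;
            return x * 3 + 1;         /* hot path */
        }

        int main(void)
        {
            long sum = 0;
            for (long i = 0; i < 1000000; i++)
                sum += classify(i);
            printf("checksum: %ld\n", sum);
            return 0;
        }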

  • It's time to install virtual throw-away PCs

    I give up. You should too. It’s time to stop trying to secure users’ web browsers, and instead just throw them away. We can’t stop users from clicking on the wrong links or going to compromised websites. We can’t eliminate drive-by worm infections or block zero-day rootkits.
