Sunday, February 19

IBM Adds Management Tool

http://www.byteandswitch.com/document.asp?doc_id=89262&WT.svl=wire2_1

ARMONK -- IBM today announced a new version of IBM TotalStorage Productivity Center, software that centralizes management of multi-vendor storage systems, improves administrative efficiency, reduces business risk and raises storage utilization. The software helps customers access, manage and deliver information more efficiently to enable Information on Demand, and leverages IBM's IT Service Management initiative, which is designed to integrate and automate IT processes.

...

Tuesday, February 14

Techworld: IBM's storage virtualization may migrate to the switch

The SVC appliance could have virtualisation functions moved out
By Chris Mellor, January 13, 2006
http://www.techworld.com/features/index.cfm?featureID=2249
The Anna Karenina principle came up in a December 2004 interview with IBM's storage virtualisation architect, Steve Legg. In it he said that the best and architecturally cleanest place for a storage virtualisation function was inside the storage fabric, partly because the fabric can see all the storage devices better than a storage array controller such as HDS' TagmaStore can.
IBM chose to implement its virtualisation product, the SAN Volume Controller or SVC, in an appliance which links to a SAN fabric switch or director. Since then there has been the development of intelligent directors which can run storage applications. A standard interface, known as the Fabric Application Interface Standard or FAIS, is being developed for these.
We talked again to Steve Legg and found out IBM's current thinking on storage virtualisation.
TW: What is the current situation with SVC?
Steve Legg: There are now over 1,800 SVC systems installed. We're probably the storage virtualisation leader based on a units or revenue measure.
TW: Could you remind us of your product platform choices when you developed it?
Steve Legg: We chose our in-band technology very much on the grounds that, then in 1999, it served the purpose with the hardware available.
TW: And the situation now?
Steve Legg: I'm not hung up on the platform where it runs. We need memory, MIPS and MB/sec of bandwidth. It's been designed so that we could always move it elsewhere if we want to. Now processing in the fabric switch (director) is starting to mature. Whilst we have no product plans right now, we are having discussions about how we can use that processing power in the switch.
The location of storage virtualisation won't have moved; it would still be in the SAN fabric. But within the SAN fabric you can do it in a variety of places; an appliance or the switch.
TW: As EMC's InVista does. Could you give us your view of InVista here?
Steve Legg: InVista is not out there yet; it's in beta. InVista will provide logical-to-physical translation. It doesn't cover caching, point-in-time copy or the other copy services that SVC does. It's not trivial to do this.
If you just want to do virtualisation then do it in the switch. If you want to do more then you need more than you can do in the switch.
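The logical-to-physical translation Legg refers to is, at heart, a lookup from virtual-volume extents to extents on physical arrays. A minimal sketch in Python of that general technique (the extent size, names and layout here are made up for illustration, not SVC's or InVista's actual design):

```python
# Illustrative sketch of block-level virtualisation: a virtual volume is a
# list of fixed-size extents, each mapped to (physical array, offset).
# Extent size and all names are hypothetical, not taken from any product.

EXTENT_SIZE = 16 * 2**20  # 16 MiB per extent (arbitrary choice)

class VirtualVolume:
    def __init__(self, extent_map):
        # extent_map[i] = (array_id, physical_offset) for virtual extent i
        self.extent_map = extent_map

    def translate(self, virtual_addr):
        """Map a virtual byte address to (array_id, physical byte address)."""
        extent_index, offset_in_extent = divmod(virtual_addr, EXTENT_SIZE)
        array_id, physical_offset = self.extent_map[extent_index]
        return array_id, physical_offset + offset_in_extent

# A volume whose first extent lives on array "A" and second on array "B":
vol = VirtualVolume([("A", 0), ("B", 512 * 2**20)])
print(vol.translate(EXTENT_SIZE + 4096))  # lands in extent 1, on array "B"
```

Because the indirection is per extent, the virtualiser can migrate an extent from one array to another and simply update the map, without the host noticing; the harder parts Legg mentions (caching, copy services) sit on top of this table.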
TW: IBM knows about what you can and can't do in switches because it has experimented with a switch-based version of SVC.
Steve Legg: Our SVC in one implementation is on two appliances that are Intel-based and in an active:active pairing. They link to the switch in a full-duplex way through four fabric ports.
Another SVC implementation fitted inside a Cisco switch as an embedded product based on a MIPS processor. Approximately nine were sold. Technically it was a complete success; commercially, a complete flop.
Switch slots represent a very specialised space. The switch has a specific backplane. An embedded design in the switch is limited in scalability. Cisco and others are now putting intelligence into fabric ports themselves. We could exploit these MIPS, MB and MB/sec for virtualisation. (But) for copy services, etc., we can still best do it in the appliance. We could split off virtualisation functions into the switch and use the (existing) appliance for caching and helping out with copy services.
Scalability
TW: Steve Legg then looked at the appliance's capacity and scalability. (Previously, in-band virtualisation approaches have been criticised for introducing a potential bottleneck into the SAN fabric.)
Steve Legg: Our SVC appliance has active:active nodes; they're paired for availability. We cluster them for scalability. There hasn't been anything said about SVC bottlenecking.
The appliance has to deal with the aggregate workload coming from the hosts, not the theoretical maximum I/O bandwidth of all the switch ports or I/O controllers. A host connected to a SAN doesn't generate hundreds of I/Os a second; it's three or four I/Os a second, because most of the time hosts are running business logic. The fan-in from hosts to the SVC can be huge.
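Legg's fan-in point is simple arithmetic: size the appliance for the hosts' aggregate workload, not for the ports' theoretical maximum. A back-of-the-envelope sketch (all figures below are illustrative assumptions, not vendor or measured data):

```python
# Back-of-the-envelope sizing: hosts mostly run business logic, so per-host
# I/O rates are tiny compared with what the fabric ports could carry.
# Every figure here is an illustrative assumption, not vendor data.

hosts = 500                  # hosts fanned in to the virtualisation appliance
iops_per_host = 4            # "three or four I/Os a second", per Legg
aggregate_iops = hosts * iops_per_host

port_count = 4               # an SVC node pair links via four fabric ports
theoretical_iops_per_port = 50_000   # hypothetical per-port ceiling

print(aggregate_iops)                          # real aggregate load: 2000
print(port_count * theoretical_iops_per_port)  # theoretical ceiling: 200000
```

On these (made-up) numbers the real aggregate load is two orders of magnitude below the theoretical port ceiling, which is why a clustered appliance can absorb a huge fan-in without becoming the bottleneck critics feared.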
TW: So the picture here is that a SAN fabric director (switch in Legg's phraseology) can be an effective place to run basic virtualisation but the physical space available in the switch limits the scalability and the storage application stack that could be needed. The best place to have applications that are layered onto the virtualisation function is in a connected (in-band) appliance which can scale up in power and availability as required.
We then asked Steve Legg about FAIS, and a couple of questions about Network Appliance and storage products from suppliers such as 3Par, Exanet and BlueArc.
TW: What is your view of FAIS?
Steve Legg: FAIS is a proposed standard. We're totally committed to that being the way to do it.
TW: Can you briefly mention where NetApp fits in with SVC?
Steve Legg: NetApp filers don't connect to SVC. The NetApp gateway, the V-Series, has backend storage in the form of Fibre Channel arrays. The SVC fits there. There is overlap here. There always will be.
TW: What do you think of the virtualisation approaches from 3PAR, Exanet, BlueArc and so on?
Steve Legg: Here the all-in-one environments are a limiting factor. You can't easily change the architecture if you want to split off functions. The SVC is open. You can buy whatever disk or disk arrays you like. You can move virtualisation to the switch if you like.
TW: And lastly, a quick look at storage controller-based approaches to virtualisation?
Steve Legg: If you put virtualisation and storage applications in the storage controller it's inflexible. You can't move functions easily to the fabric; it's limiting. Also suppliers will inevitably optimise components to improve performance and/or cost and thereby prejudice openness.
TW: As an architect Legg is strongly biased in favour of clean interfaces between functional levels in a stack or interfaces in a network. It's the OSI 7-layer model approach, one that has proved its enduring worth over time.
We might expect to see IBM hiving basic virtualisation off the SVC appliance and running it inside a Brocade, Cisco or McDATA director. But the indications are that it believes it can provide better storage virtualisation-based functions by running them, still, on its SVC appliance.
With customer installations heading towards 2,000 it is an approach that resonates well with a lot of customers. Many of them, if not all of them, will have Brocade, Cisco or McDATA directors. Why should they change their approach?
In one sense it will be an easy choice to make. What will the director-based storage applications be able to do now and in the future that the SVC appliance won't? What are the comparable costs and management tasks? What do the roadmaps look like? Tick the boxes and choose. To throw out an incumbent the director-based products have got to be really good. Expect a flood of white papers and analyst reports trying to prove things one way or the other.

CeBIT 2006: Open Text invites visitors to its ECM Lounge

http://www.verivox.de/news/ArticleDetails.asp?aid=23635&pm=1

The author, Open Text GmbH, and not Verivox GmbH, is responsible for the content of the following release.
Munich, 10.02.2006 - Open Text™ (Nasdaq: OTEX, TSX: OTC), the leading independent provider of Enterprise Content Management (ECM) software, again invites visitors to its ECM Lounge at this year's CeBIT 2006. The main theme of the trade-show appearance is the company's strategic partnerships with Microsoft and SAP, which have been substantially expanded again in recent months and have already produced first results in the form of new products. Thanks to the service-oriented architecture of the Open Text solutions, however, in principle any data source can be integrated into an ECM landscape. Open Text thus delivers both structured and unstructured content from a wide variety of systems and applications to the user, integrates it into business processes and manages its complete lifecycle. Co-exhibitors in the Open Text ECM Lounge are the partners Deutsche Post AG, IBM Deutschland, Kofax and Oracle. The Open Text ECM Lounge at CeBIT 2006 is located in Hall 3, Stand C57.

...

- At the Open Text stand, IBM, a long-standing technology partner of Open Text, together with IBM business partner SVA GmbH of Wiesbaden, is showing the Livelink PDMS solution based on the IBM eServer pSeries/p5, as well as IBM's own storage solutions such as the IBM TotalStorage DR550. "Livelink for Production Document Management" is an archiving and document management system for companies that need to manage high volumes of data and documents efficiently and over the long term.

Thursday, February 2

Second Generation CDP

http://www.line56.com/articles/default.asp?ArticleID=7301

Continuous Data Protection, more commonly known as CDP, is the buzzword du jour in the data protection market

by Scott Jarr, Lifevault

Thursday, February 02, 2006

--------------------------------------------------------------------------------
Continuous Data Protection, more commonly known as CDP, is the buzzword du jour in the data protection market. Recent announcements by large, brand-name vendors like Microsoft, IBM/Tivoli and Symantec all tout their latest products as providing continuous data protection, so the market is categorizing all CDP products as "new."
In fact, CDP products have been around since the late 1990s. Dozens of vendors claim to provide CDP, each with their own perspective on what exactly CDP entails. For an additional perspective, Google "CDP software" and you will receive nearly 200,000 results. That said, if you look at what CDP really means and do your best to avoid the marketing hype, you'll soon learn that what most vendors are offering is rudimentary and won't fully protect your business: they claim to provide CDP, but what they actually offer is continuous data backup, not continuous data protection. CDP is already into its second generation, and it's no longer just about backing up data.
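The core idea distinguishing continuous protection from periodic backup is that every write is journalled with a timestamp, so the data can be rolled back to any past point in time rather than only to the last backup. A toy sketch of that idea (an illustration of the general technique, not any vendor's implementation):

```python
# Toy sketch of the journalling idea behind continuous data protection:
# record every write with a timestamp, then replay the journal up to any
# chosen moment to reconstruct the state at that point in time.
# This is an illustration only, not any product's actual design.

class CdpJournal:
    def __init__(self):
        self.entries = []  # (timestamp, block_number, data), in arrival order

    def write(self, timestamp, block, data):
        self.entries.append((timestamp, block, data))

    def restore(self, as_of):
        """Rebuild the block map as it looked at time `as_of`."""
        state = {}
        for ts, block, data in self.entries:
            if ts <= as_of:
                state[block] = data  # later writes overwrite earlier ones
        return state

j = CdpJournal()
j.write(1, 0, "v1")
j.write(2, 0, "v2")   # block 0 overwritten at t=2
j.write(3, 1, "x")
print(j.restore(1))   # state before the overwrite: {0: 'v1'}
print(j.restore(3))   # current state: {0: 'v2', 1: 'x'}
```

A nightly backup could only return block 0 to whatever it held at the last backup window; the journal can return it to its value just before any individual write, which is the any-point-in-time recovery that distinguishes protection from mere backup.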

...

IBM at CeBIT 2006

http://www.verivox.de/news/ArticleDetails.asp?aid=23086&pm=1

Stuttgart, 01.02.2006 - Innovative solutions for On Demand Business are the focus of IBM's appearance at CeBIT 2006 (March 9-15 in Hanover). Under the trade-show motto "Your direction IBM", the company is presenting a comprehensive range of business and infrastructure solutions on a total of more than 3,500 square metres of exhibition space. A particular highlight at the main IBM stand in Hall 1, Stand F41/F51, is an innovation showcase on the theme of aviation: a model of the new Airbus A380 invites visitors to explore scenarios from the areas of security, RFID and Product Lifecycle Management. In addition, Forschungszentrum Jülich is showing examples from the field of supercomputing. As a further highlight, IBM Research and Development presents future technologies such as the Cell processor and Marvel, a novel search engine for multimedia data. In Hall 4, Stand A12, IBM's midmarket division and more than 50 IBM Business Partners are showing solutions for small and medium-sized businesses. At Stand C61 in Hall 9, everything revolves around IT solutions for the public sector and healthcare. IBM is also represented with numerous expert talks in the trade fair's Communication Center and at special events.

...

The IBM System Storage division is showing current technologies for building an Information On Demand infrastructure. These include support for continuous business operations, with functionality for high availability and disaster recovery. IBM demonstrates how its Global Mirror technology backs up and restores data even across large distances. With the integrated IBM DR550 system and tape systems, IBM also shows options for building an Information Lifecycle Management solution. In addition, visitors gain insight into technologies for simplifying the storage infrastructure, such as the IBM SAN Volume Controller for virtualising storage networks.

...