The ABC of Lock-In

There have been a lot of discussions lately about a topic I find extremely interesting: vendor lock-in.

Multi-hypervisor management is one discipline where you can apply the high level ranting below, but you can really apply it to pretty much everything in IT.

I started this blog post by writing a couple of pages (as usual) and then I thought no one would care to read it (how can I blame you?). So I summarized it in a few pictures. A picture is worth a thousand words. Always.

So the story goes like… you (the customer) start with A and you build or buy an ecosystem of people, tools, knowledge, programs, scripts (yeah, A has APIs) and a lot of other things you need to fully exploit the value of A.

You (the customer) are happy, but then vendor C comes to your door and tells you that you are locked into A. “It isn’t so easy to move away from it given all the investments you have made,” he says. “Imagine if A were to apply a vTax at some point: God forbid!” C goes on. C tells you there is B now, which is good and cheap, and you can adopt both A and B so you are not locked into either. “Let C manage them for you transparently,” he says. And this is what happens (in theory):

Yeah, all of a sudden you (the customer) find out that (2 years and 2M$ of professional services later) you are… locked into C. Imagine now if C were to apply a cTax… God forbid! You would need to move to D, which is cheaper, and the story goes on and on. What’s your business? Bank transactions? Shoemaking? Doh, I thought you wanted the infrastructure to disappear, not become your core focus.

If you thought that this was the end of a sad story, there is more. Actually it gets a lot worse than this. It turns out that (2 years and 2M$ of professional services later) you can actually only send “heterogeneous” alerts (such as <the disk is full>) to operators in the middle of the night, and perhaps present a web interface to a user to power a VM on and off on both platforms A and B. Oh, and did I mention that when A and B deliver a new version of their platforms you need to give C another good 2 years and 2M$ to “adapt”? Ok, now I told you.

You thought this was the end, didn’t you? Well, not quite; there is even more:

Since you can only send “the disk is full” type alerts and provision a VM from a portal (which is neither multi-hypervisor management nor IaaS cloud, by the way), you have to build another ecosystem for B similar to what you built for A, essentially doubling your past efforts (which is the reason many people argue that a multi-hypervisor strategy is inefficient).

Can it get any worse than this? I can’t think how… however, if it can, it will. Be sure.

Tip 1: I have seen these things. First hand. You have every right not to trust me and to think I am biased now, though. That’s ok.

Tip 2: In the interest of time (I’ve got work to do too) I exaggerated to make a point. Apply your common sense. Look at the forest, not the trees, in this post. I was also having some fun with some of you. You know who you are.

Discuss below if you want. I am running out of time.

Massimo.

Update: reading the comments below I am starting to realize there is a chance this post gets misread and misunderstood. I genuinely believe there is a difference between “being able to use both A + B as loosely coupled platforms” and “using C to avoid lock-in and manage multiple platforms as one”. This post was meant to say that the former is doable but can be inefficient, while the latter is just a unicorn thing. More in the discussions underneath.

37 comments to The ABC of Lock-In

  • Kookaburras, eh?

    Well Massimo, I think your explanation only works if you buy into the idea that all of these must be centrally managed. I believe you missed my point about the way things are often deployed in enterprise IT, which is more of a siloed approach. Rarely do we see any large enterprise go “all-in” on a technology. Virtualization is a tech that gets adopted over time. Different IT silos with their own budgets, cycles, and needs make purchasing decisions based on what is most useful and cost-effective for their workloads. Your assumption is that there needs to be some uber-console (one ring to rule them all) in order for all of this to make sense; this creates all sorts of multi-tenancy problems that are likely unbudgeted for and altogether undesirable to all but perhaps some centralized IT department that thinks it owns everything because it reports directly to the CIO. In reality, each of these silos is perfectly content to implement virtualization its own way on the hypervisor of its choice. Prior to the vTax, VMware had an affordable enough option for everyone to buy into; now with vRAM it becomes a lot harder to predict costs for some workloads, and IT departments are becoming more interested in less expensive solutions for workloads that don’t need all of vSphere’s bells and whistles.

    Sun used to own the datacenter with Solaris. Microsoft used to own the desktop with Windows. You can choose to be a salmon and swim against the tides of change, or you can accept the fact that cheaper alternative solutions exist and embrace them holistically.

  • John C

    This seems so much true about a product called VMware vCenter Operations Management Suite, which can manage heterogeneous platforms (like getting data from SCOM) and is priced at $34,250 for 25 VMs. How is that for a vTAX?

    Moreover, if customers choose VMware vSphere for their hypervisor, VMware will sell them Cloud Foundry in the future, which is based on (guess what) vSphere and vFabric, so now you will be locked into not only IaaS but also PaaS with VMware, and VMware can squeeze you with crazy pricing mechanisms like memory, density, cores, and such. Great article on how bad a lock-in can be with VMware…

    • Massimo

      John, first thanks for coming in. Second, it would be nice if you could say your affiliation.

      I have never said vCOps is a tool to “manage” multiple environments. Nor did I say that with vCOps you can build a cloud. vCOps is a perfect example of a tool that does multi-platform “monitoring”. Is it expensive? Maybe, but at least it does more than reporting “the disk is full” (hopefully). At the end of the day, if it is too expensive customers won’t see the value for the money and won’t buy it. Let’s move on.

      Re CloudFoundry and vFabric oh man… I suggest you do your homework before writing.

      From Steve Herrod: http://blogs.vmware.com/console/2011/04/cloud-foundry-delivering-on-vmwares-open-paas-strategy.html
      “Cloud Foundry can be deployed in public or private clouds. It runs on top of vSphere and vCloud infrastructure but can also run on top of other infrastructure clouds. Our partner RightScale today is demonstrating the deployment of Cloud Foundry on top of Amazon Web Services. Because of the open architecture, it could also be implemented on top of other infrastructure technologies like Eucalyptus or OpenStack.”

      Try it yourself (without having to pay any vTAX!): http://blog.rightscale.com/2011/04/12/launch-vmwares-cloudfoundry-paas-using-rightscale/

      Sorry John, try again.

      Massimo.

  • Anthony

    Massimo, I like your description of the evolution of IT and the impact that competing vendors have on the big picture. Imagine if “A” acquires “B”. Voila!!! Locked in again.

    Or consider that the customer is acquired by a larger company that demands that “B” or “C” be managed by “A”.

    There are so many ways this can go. The important thing for IT leadership to realize is that there is no insurance policy against vendor lock-In.

    When we buy a new car, we don’t purchase a second electric car in case the cost of fuel rises; we commit to one or the other. And there isn’t an insurance policy around that will pay the difference in fuel costs from the day you purchased.

    This is what evaluations, pilots, prototypes, and IT design are all about. Building a solution that works for the business, one that gives the business an advantage in its respective industry. It is just not realistic to spend twice (or more) the amount required to account for all possible IT industry contingencies.

    This is also the main goal and objective of OPEN standards and standards committees (vs. de facto standards or proprietary technology).

    This is also a key tenet of cloud computing (well, some clouds), where portability and choice of provider mitigate the risk of one vendor having too much control over the destiny of a company’s IT design.

    The IT consumer community seems to have forgotten how to demand these open interoperability standards to ensure they have choice and true competitive value. And, oh how so many vendors love that.

  • John C

    Clarification: doesn’t this still require vFabric? I didn’t go through all the details in your blog link, but I understand that Cloud Foundry requires vFabric as the application layer.
    Can you really have an open PaaS if you have a closed middle tier? It seems like smoke and mirrors. And the thing I fail to understand is why developers would learn a new programming model and rely on unreliable third-party hosters to run their apps. Some of these hosters might just disappear in the future. What happens then? Can VMware guarantee the SLA and availability? Remember, in the end it is quality vs. quantity. Customers don’t care if Cloud Foundry runs on 5000+ hosters; what they need is a predictable cloud model. With VMware it is simply not possible!! Great vTaxes: density and memory…

    • Massimo

      John,
      vFabric isn’t “a product”. It’s a moniker/umbrella that includes dozens of products (many of which are open source). I don’t work on it (directly). CloudFoundry doesn’t require “vFabric” in the sense of a closed-source, licensed, “proprietary” product (which is, I believe, what you intended to mean). CloudFoundry is a complete open source project that you can find on GitHub (https://github.com/cloudfoundry). CloudFoundry uses some of the open source technologies that fall under the vFabric moniker, but you don’t need to “buy” anything if you want to use it. Nothing like a “closed middle tier”.

      Next you bring to the table another interesting topic, which is “where would you run your app”. We figured out how to run it at 5000+ hosters, in your own IT, and on your laptop (Micro-CloudFoundry). If that isn’t enough flexibility for you then I surrender. I doubt we can do any better. Likely this excites the remaining 6B people on planet earth.

      I am not sure we force developers to “learn a new programming model” either. We support (and the longer-term strategy is to support) the most commonly used application frameworks that developers have chosen, including .NET (yeah, we are very “proprietary”, as you can tell). Regardless of all this, developers tend to use new programming models not because we force them but because they choose to. If that wasn’t the case we would still be talking about RPG and COBOL.

      Massimo.

  • Pete

    Look, we all know that vendors would love as much lock-in as possible, but customers have been burnt over the years so are much more wary. Everyone knows that VMware needs to attach as much product as possible to its core vSphere product, which is still where the VMware $$$ are being made. Customers will begin to introduce other hypervisors into their dev/test environments to build other skills in their IT teams and to keep VMware honest on pricing in their prod environment, with the “threat” of moving to other platforms. Maybe to at least get a better deal on their next ELA re-sign!

    Pete

    • Massimo

      Pete, and I have precisely nothing against that! I know how the world works. Trust me. The message I wanted to give in this blog post is that it’s ok to do A and B as separate, loosely coupled things (perhaps tied together by high-level monitoring systems). IT loses in efficiency but the Purchase Office wins in negotiation power. This practice is as old as the world. Some organizations will use it as a weapon for a better discount, others may be more serious about it for other reasons. For example, I totally agree with some follow-up Twitter discussions that SPs probably need to do more than just A, simply because they may want to offer compatible platforms for customers that have chosen different platforms (some may choose solely A, some may choose solely B for efficiency). What I wanted to clear up is this illusion of “the single pane of glass” and “no lock-in” that C brings to the table. Yeah… and then you wake up in the morning and realize it was just a dream, while the mobile starts ringing at 8AM because “there is a problem”. Ok, onto another business-as-usual day. Dreams are over.

      Thanks.

      Massimo.

  • Massimo, as always it is a pleasure to read your posts.

    People, it is time to wake up and smell the dirty chai. Vendor lock-in is exactly what each and every vendor wants.

    Cisco/Juniper want you to buy more of their products – and only theirs – so that you buy less from their competitors.

    IBM/HP/Dell want you to buy more of their servers/blades/etc. so that you buy less from their competitors.

    It is no different in the hypervisor market – you can apply the same logic, just substitute the vendor names with VMware/Microsoft/Xen/RH…

    There will ALWAYS be an advantage to the tools and technologies provided by VMware that manage/monitor/orchestrate/etc. the products they sell. They will never be able to manage their competitors’ products as well as they do their own. The same goes for Microsoft/Xen/RH (or any other vendor, for that matter).

    So yes, VMware wants lock-in, Microsoft wants lock-in – Dell/IBM/HP as well. The only one who does not want lock-in is the customer, for the reasons stated in the comments before mine.

    But as you said, if any company comes in and says it can solve the problem of you being locked into a single vendor, then it will only be a matter of time before you become locked into the product you just bought.

    A multi-hypervisor datacenter is something that will become a reality – sooner rather than later.

    It is just that people do not understand – changing a platform is not as easy as changing the company that supplies the paper for the photocopier in the office. There are so many more hooks into the way your business ties into this platform that such a change will be significant, a lengthy process, painful and, most of all, NOT CHEAP!!!!!

    What I (the customer) should now think of – and start to plan for – is how to prepare myself for this new reality, and how to make the absorption of these new platforms into my datacenter as smooth, as quick and as cheap as possible.

    The search for the holy grail of the “one tool to manage them all” is just that – a myth. There is no such tool (at least not yet), and if one ever does come around, beware of vendor lock-in once more.

    Thanks for a great read!!

    • Massimo

      Thanks Maish. I am glad you captured the spirit of my post.

      This is a great discussion. I am very intrigued by your statement “A multi-hypervisor datacenter is something that will become a reality – sooner rather than later”.

      Going back to the update in red I did earlier this morning: I am not religiously opposed to the concept of having two or more platforms (i.e. hypervisors in this context). While I claim this adds complexity (and you do too) and creates inefficiencies… you seem to have a strong opinion that this will happen regardless. I would envision a future where customers may either opt to have one (for consistency, efficiency and an overall better experience) or two (for purchase department policies, perhaps, or other reasons). I am curious to hear why you completely rule out option number 1 and only expect number 2 to happen.

      Thank YOU for a great comment.

      Massimo.

      • I don’t completely rule out #1 (going for one and only one platform). Operationally it is the right thing to do.

        The admins that manage the environment are constantly being bombarded by upper management to cut costs. And when the CEO/CIO is invited to a workshop by Microsoft and hears that they can cut their licensing costs by 10, 20, 50, ?? percent (whether that is true is a WHOLE different discussion…), they then come back to the admin and say “I heard X, Y and Z.”

        It is not always possible to explain why the claimed change in licensing cost is not accurate, why it will not save money, why the feature set is not comparable, and what this kind of change will entail – also from a cost perspective.

        The bottom line could come down to this: even if it only saves 10%, it still saves. Even if it means that administration becomes more complex – because of the introduction of a new platform – it is perceived as part of the admin’s job to make it work, even if it is harder.

        That perception is not always the way it should be.

        Not everything will need to run on the Rolls-Royce of platforms – because it is not important enough, because it is new; it can be for a number of reasons. That is why a second platform, with a different (and maybe lesser) feature set, will be considered, even if it introduces additional complexity.

        Will running things on multiple platforms save licensing costs – most probably.
        Will running things on multiple platforms introduce more complexity – definitely!
        Will running things on multiple platforms save money on the bottom line – I am doubtful.

        But not everyone sees things the way I do. (if they did – phew… life would be so boring – but so much simpler :) )

  • This whole vendor lock-in argument is rubbish. If vendor lock-in were such a huge problem then the only hypervisor anyone should be migrating to right now is open source Xen (not XenServer). Yes, VMware wants you locked in, but so do Microsoft and Citrix and others. Massimo hit the nail on the head. As long as your multi-hypervisor management tool comes from a for-profit vendor, you still have lock-in. And if the vendor of the management tool is also one of the hypervisor vendors, look out man! I also think this post ignores a very big issue, one that I’ve been ranting about on Twitter: who wants two hypervisors? To me, two hypervisors means two skill sets to support, two different server build processes, two times the number of abnormal things that can go wrong during patching, driver upgrades and OS upgrades, two sets of DR documentation, etc. etc. Now I realize people are doing this today. But if you’re doing it because of pricing, then renegotiate with your vendor. If they won’t budge, then run both for a short period until you transition over, and cut the cord. Nothing says “Hey vendor, you f’d up” like not buying any further product from them. If you play this game of half our stuff on A and half on B, you never send a strong message to anyone. Multi-hypervisor in my mind is a transition story. Period. Anything other than that and it’s just plain silly.

  • I think the posts above have a lot of useful content, so I won’t go over some of those points again.

    I’d add one very simple point though, which expands on Shawn’s post. If vendor lock-in were the primary issue, everyone would go for a completely open source product; however, the obvious reality is you make a compromise between lock-in and the features you need. This isn’t just true for hypervisors, but for any software (or hardware, in fact!).

    This discussion is mainly around internal enterprise usage, but my viewpoint is that of service providers and what they should support, where I think a continuing multi-hypervisor strategy *can* be valid. It isn’t always – again, it depends on what their offering to their customers is – but for freedom of choice now and in the future it can be a sensible option.

    Generally speaking, I believe the higher up the stack the lock-in exists (if it exists), the more flexible the overall offering you can deliver underneath, provided the top level has been designed to be agnostic to the levels underneath it.

    For an extreme example: if you have a piece of hardware that can only run one hypervisor, which can only be managed by one piece of software, that is less flexible than being hardware- and hypervisor-agnostic. Using a piece of software that can manage more than one hypervisor, even if it is only used to manage one to begin with, can thus give you potential advantages.

    All in my (slightly biased) humble opinion of course.

    • Massimo

      Thanks Tony for chiming in. This is a much better way to communicate (at least it’s not limited to 140 chars). You make good points, although some of the assumptions are a bit stretched. I agree that the dynamic at an SP may be slightly different than those of an Enterprise. We may not agree on the reasons but there may be indeed different dynamics.

      > Generally speaking, I believe the higher up the stack the lock-in exists (if it exists), the more flexible the overall offering you can deliver underneath…
      This is the theory I challenged in my post.

      > provided the top level has been designed to be agnostic to the levels underneath it.
      And this is the big assumption I challenged in my post, and what makes the theory above impractical.

      To your example: I could see someone arguably saying that, similarly to how A (or B) can support HW from vendors 1, 2, 3 and 4 transparently… C may be able to support different HVs from vendors A and B.

      Well, I have worked in the first context (in my previous IT life) and I am seeing the second context now. I can tell you, with an incredible amount of confidence, that normalizing heterogeneous hypervisors is between 4 and 5 orders of magnitude more difficult than normalizing heterogeneous x86 servers.

      This latest statement would also require a better definition of what a “hypervisor” is. We tend to use that word regardless of whether we use it in the context of KVM (a Linux kernel module) or in the context of vSphere+SRM+vShield+… (or the MS stack, for that matter). The injection point into a stack (A or B) is important but rarely discussed.

      > All in my (slightly biased) humble opinion of course.
      I like that. Same here.

      Thanks.

      Massimo.

    • Tony said “however the obvious reality is you make a compromise between lock-in and the features you need”

      That’s exactly my point. If you need XYZ features, pay for them. If it costs too much, find a different mature hypervisor that has what you need for less cost and move to it.

      Aside from the SP market, I just don’t see the need to run multiple hypervisors. It’s silly.

      Shawn

    > Well, I have worked in the first context (in my previous IT life) and I am seeing the second context now. I can tell you, with an incredible amount of confidence, that normalizing heterogeneous hypervisors is between 4 and 5 orders of magnitude more difficult than normalizing heterogeneous x86 servers.

    Certainly not going to argue with this; having built comparable functionality across Xen, KVM & vSphere for our orchestration product, with more to be announced soon, I can validate that point! :)

    > This latest statement would also require a better definition of what a “hypervisor” is. We tend to use that word regardless of whether we use it in the context of KVM (a Linux kernel module) or in the context of vSphere+SRM+vShield+… (or the MS stack, for that matter). The injection point into a stack (A or B) is important but rarely discussed.

    Kinda hits the nail on the head of the whole argument. Having a platform managing KVM & Xen is quite a different proposition from an entry point into vSphere, for example. (Our platform manages the entire stack when it comes to Xen & KVM, including booting the hypervisor; for VMware, however, we talk directly to vSphere rather than ESXi, and so of course it acts differently under the hood.)

    • Massimo

      And that is where it starts to become complex to compare like with like.

      If we consider the VMware, MS, RedHat, Citrix stacks to be at the level of A in my rant, and assuming you are using the vanilla open source KVM and Xen code… it looks to me like you are building something at the A level as well. You just happen to use more hypervisors (and I really mean hypervisors here) whereas the others are just using one (what MS is doing re vSphere support requires a whole brand new post). In other words you don’t appear to be a C (which also usually claims to be able to manage Unix, mainframes, AS/400, network, storage, and every single component you may imagine to be sitting somewhere in your data center).

      If that is the case (you’ll disagree, I guess) I can hardly see a customer being worried about being locked into ESXi, Hyper-V, KVM or Xen… they usually “fear” being locked into VMware, Microsoft, RedHat, Citrix… (and now you!) based on the value and richness of features each of them provides on top of a hypervisor (a hypervisor being a piece of software that can run two OSes on one physical server).

      Warning: expect C to be willing to manage you as well. ;)

      Massimo.

  • Lance

    A more general case is vendor C saying “I will manage your complexity by adding a simplifying layer on top”, and 2yrs and $2M later there’s now yet another layer of complexity that has to be monitored, maintained, and managed. The tricky bit is to simplify by ELIMINATING layers, which has the side effect of eliminating expertise and expense. Sometimes it has to be done by eliminating choice, but that often turns into a better long-term strategy.

  • Turn back time 7 years, and replace A with ${Hardware-Vendor A} and B with ${Hardware-Vendor B}. A cool company C provided a virtualization layer to abstract the customer’s workloads from the actual details of the underlying hardware.
    So the customers were happy, because they were no longer locked into A or B.

    Nobody cared about a possible lock-in to C at that time; it was just too amazing to have freedom of choice between different hardware vendors, along with a lot of other cool features for availability, resource management…
    (Did it take 2 years & 2M$ to implement server virtualization?)

    Now, 7 years later, discussion (finally) arises about vendor lock-in for the virtualization layer. The solution seems to be easy: just add another layer on top (again), one that virtualizes your management processes away from the actual details of the underlying hypervisor.

    For this you can go the “raw” way, using API-virtualization tools like Layer7 or Vordel. Or you can choose one of the more “IT process manager friendly” orchestration tools like vCenter Orchestrator (vCO, without ps!), Flexiant (cheers, Tony :-) ), BMC, Opalis (it was rebranded these days, wasn’t it?), HP Ops Orchestrator, ……
    Then your IT processes are independent from the actual underlying infrastructure.
    And you’re free from hypervisor lock-in.
    (It might again cost 2M$, but in case of vCO I’ll try to help you out in 1 year :-)

    Just wait another 7 years, and the discussion will rise again about orchestration-layer lock-in.

    In software design you try to avoid such lock-in with the principle “Program to an interface, not to an implementation”. But for our topic it’s hard to do this, because there are no vendor-independent APIs. (Yes, we have the OVF format, OpenFlow, … but they are just limited subsets of the vendor-specific implementations with all their features.)
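
    To make that principle concrete, here is a minimal sketch in Python; the interface and driver names are hypothetical illustrations, not any vendor’s real API. Your processes code against the interface, so moving vendors means swapping a driver. The catch is that the interface can only expose what all vendors have in common, which is exactly the “limited subset” problem above.

        # A minimal sketch of "program to an interface, not to an implementation"
        # applied to hypervisors. All names are hypothetical illustrations.
        from abc import ABC, abstractmethod

        class Hypervisor(ABC):
            """The vendor-independent interface your IT processes code against."""

            @abstractmethod
            def create_vm(self, name: str, cpus: int, memory_mb: int) -> str:
                """Provision a VM and return its identifier."""

        class VendorADriver(Hypervisor):
            def create_vm(self, name: str, cpus: int, memory_mb: int) -> str:
                # ... translate the request into vendor A's API calls here ...
                return f"a:{name}"

        class VendorBDriver(Hypervisor):
            def create_vm(self, name: str, cpus: int, memory_mb: int) -> str:
                # ... translate the request into vendor B's API calls here ...
                return f"b:{name}"

        def provision_web_tier(hv: Hypervisor) -> str:
            # The process sees only the interface; changing vendors means
            # swapping the driver, not rewriting the process.
            return hv.create_vm("web01", cpus=2, memory_mb=4096)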

    My personal conclusion: what’s so bad about lock-in? With all the out-of-the-box solutions for a complete infrastructure spreading out everywhere (vBlocks, vPlex, …) I think that’s exactly what customers want. It may still cost 2M$, but (if you can trust the presentations) the implementation is just plug-and-play.

    Cheers,
    Joerg

    PS: 7 years ago a small Swiss company named Dunes had an orchestration product with plugins for VirtualCenter 1.x and MS Virtual Server 2005 (see http://web.archive.org/web/20050412161043/http://www.dunes.ch/vso.htm ). I wonder if this plugin {still exists in a well-closed and hidden safe | is being updated by some unknown developer team in a dark room without windows} in Sofia… ;-)

    • Massimo

      Joerg, please read the other comments. Re your parallel of A and B being hardware….. I won’t repeat what I had to say… search in the comments for the word “HW” and see what my (practical) thought was.

      Massimo.

  • Terry Jones

    Massimo – I have been saying this for years (without the wildlife pictures!), +1!

  • Michael

    I do not agree. From math you should achieve 2!
    That is, my mainframe + storage is, from a math standpoint, the best system. Two components to combine = 2! -> simple.
    At the moment the industry is doing multi-vendor hypervisor marketing.
    But it is already a fact that the hypervisor ends up in the processor (Intel/ARM).
    The hypervisor is only needed to move from hardware vendor A to vendor B,
    without reinstallation costs for the application.
    At the moment we have too many APIs, which is very bad for the math.
    Look at business process reengineering and kill all non-value tasks.
    http://en.wikipedia.org/wiki/Business_process_reengineering

    • Massimo

      Michael, I am not sure I understand what your point is. But “hypervisor is only needed to move from one hardware vendor A to vendor B, without reinstallation cost for the application” sounds a l-i-t-t-l-e bit narrow of a view IMO.
      Thanks. Massimo.

      • Michael

        Massimo, if you look at a datacenter from a mathematical point of view, then a multi-vendor strategy changes the factorial and you get an exponential curve: more complexity.

        Formula for the factorial: http://en.wikipedia.org/wiki/Factorial

        Layer 9: User part (software, dynamic) n!+1+1+1+1+1
        Layer 8: Application (software, dynamic) n!+1+1+1+1+1
        Layer 7: Guest OS (software, dynamic) n!+1+1+1+1+1
        ————————————
        Layer 6: Hypervisor (software, dynamic, mobile element) n!+1+1+1+1
        ————————————
        Layer 5: Server/Network/Storage (hardware, static element) n!
        Layer 4: Rack space (hardware, static element) n!+1+1+1
        Layer 3: Emergency power/diesel (hardware, static element) n!+1+1
        Layer 2: Floor space (hardware, static element) n!+1
        Layer 1: Building (hardware, static element) n!

  • Michael

    I forgot to say that, through the combination of the layers (the multiplication of the layers’ factorials), if any layer changes from n=1 to n=2 (the simplest form of a multi-vendor strategy), the factorial of that layer changes (in this case from 1 to 2, i.e. by 100 percent).

    E.g. look at set theory.

    Combine two elements (computer and network) in a single-vendor strategy:
    Set 1: Computer vendor (HP) -> 1 element -> n=1 -> n! = 1! = 1
    Set 2: Network vendor (Cisco) -> 1 element -> n=1 -> n! = 1! = 1
    Combinations of set 1 × set 2 = 1! × 1! = 1 × 1 = 1 combination

    Combine two elements (computer and network) in a multi-vendor strategy (two vendors):
    Set 1: Computer vendor (HP, Dell) -> 2 elements -> n=2 -> n! = 1×2 = 2! = 2
    Set 2: Network vendor (Cisco, Juniper) -> 2 elements -> n=2 -> n! = 1×2 = 2! = 2
    Combinations of set 1 × set 2 = 2! × 2! = 2 × 2 = 4 combinations

    Combine two elements (computer and network) in a multi-vendor strategy (three vendors):
    Set 1: Computer vendor (HP, Dell, IBM) -> 3 elements -> n=3 -> n! = 1×2×3 = 3! = 6
    Set 2: Network vendor (Cisco, Juniper, Arista) -> 3 elements -> n=3 -> n! = 1×2×3 = 3! = 6
    Combinations of set 1 × set 2 = 3! × 3! = 6 × 6 = 36 combinations

    As you can see, the number of combinations increases factorially (even faster than exponentially). Oops, and that is within the same layer 5, not between layers, for example between layers 7 and 8.
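
    For what it’s worth, the counting above can be reproduced in a few lines of Python (a sketch of the formula exactly as stated; whether factorials are the right complexity model is a separate question):

        # Reproduce the counting above: each set of n vendors contributes
        # n! (the number of orderings of that set), and the sets multiply.
        from math import factorial

        def combinations(*vendor_sets):
            result = 1
            for vendors in vendor_sets:
                result *= factorial(len(vendors))
            return result

        print(combinations(["HP"], ["Cisco"]))                     # 1! x 1! = 1
        print(combinations(["HP", "Dell"], ["Cisco", "Juniper"]))  # 2! x 2! = 4
        print(combinations(["HP", "Dell", "IBM"],
                           ["Cisco", "Juniper", "Arista"]))        # 3! x 3! = 36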

  • Michael

    I like the picture on page 3 of this presentation very much: “Chaos in the Enterprise”.
    http://imexresearch.com/reports/EXEC%20SUMMARIES/IMEX%20-%20VZ%20Exec%20Summary.pdf

  • Yitzhak Bar Geva

    Without revealing my real age, in octal notation of course (Heaven forbid in decimal or hex), let it suffice to say that I’ve lived through a few of these paradigm shifts already, from the days of IBM and the Seven Dwarfs, when software was black magic and IBM held the world of computing by the (well, never mind), much as Cisco has held networking through the past decades.
    May I share some thoughts relevant to this bubbling thread:
    1. Without realizing it, all the participants in this thread and their many readers are Champagne-bottle poppers in the Open Source victory celebration – something that until not so long ago “serious” IT managers sneered at, relegated to the realm of geeks and certainly of no value to “responsible” enterprise managers who had a job to get done. They wouldn’t touch open source software with a ten-foot pole! Go back up to the top of this thread and imagine that OpenFlow/SDN were akin to the Linux of 10 years ago. How many of you would have so much as bothered entering the conversation? Wouldn’t it be reasonable to conjecture that open source is not going to stop dead in its tracks, but rather keep plowing ahead, so that its relative weight in the mix five years from now might be several times what it is today?
    2. We’re all focused on the big players. By players, I mean customers. Previous quantum leaps (read: paradigm shifts) were characterized by movement into smaller and smaller hands, but order-of-magnitude higher numbers of them. With today’s Linux hosting hundreds or even thousands of LXC-contained VMs, hosting today’s full-blown data center on a smartphone is around the corner.
    3. Following 2 (above), don’t bother racking your brains over “why would anyone in his right mind ever want to host a data center on his smartphone”. That’s the way it always goes. Remember when PCs were toys? Who could ever have dreamed of the applications running on today’s smartphones a few years back? If it can be done, and for cheap, it will be. The thing to keep in mind is that Stallman’s vision has become reality. The power of creation and innovation is now in the hands of the masses, and they’re going to forge ahead whether the benevolent godfathers of Cisco, IBM, Microsoft and all the others like it or not. If you’ll allow me to add my own vision: it won’t be dull!

    • Massimo

      Yitzhak, are you sure you commented on the right thread? ;)

      • Yitzhak Bar Geva

        Q: “Yitzhak, are you sure you commented on the right thread?”

        A: Yep!

        The issue is whether, and to what extent, vendors will be able to lock their customers in. The road to the answer is through reviewing the patterns of previous paradigm shifts, characterized by the ever-growing influence of Open Source and “People Power” (get ready!).
        All of us with an iota of honesty should put our left hand on the Bible, raise our right hand and swear that when the first smartphones came out, we were sure that the beneficiaries could only have been the big carriers, and that none of us (well, nearly none of us) had any inkling the world would soon be swamped with zillions of apps produced by the “proletariat”.
        Vendor lock-in isn’t going to dissipate overnight, and the use of those tactics is fair enough in the competitive jungle. Just keep in mind that we’re in a different ball game. It’s no longer IBM and the Seven Dwarfs of the ’60s and ’70s, or Microsoft and its Seven Dwarfs of the ’90s. The world of networking is unshackling itself from the dark Cisco-and-her-Seven-Dwarfs age of protocol purgatory, when vendor lock-in was real vendor lock-in. Nowadays the poor vendors have to sweat a lot more to lock their customers in, and their bear hugs are baby-bear hugs.

  • Yitzhak Bar Geva

    Right, Massimo. Only that today’s lock-in is a whole lot less locked-in than before. No one is offering a quantum leap from total subjugation (Cisco/IBM/Microsoft, etc., each in its time) to the utopian world of total freedom. It happens gradually. Lock-in is still with us, and companies will continue using strong-arm tactics to make a buck. But look at where networking is in relation to where it was, and it’s safe to assume that we’re on the right road.

  • […] solution to avoiding vendor lock-in – even if that may have been one of the early, albeit misguided, goals.  In fact, VMware is one of the top contributors to this open source project and the real […]
