Monday, June 8, 2009

Today is the opening day of the Apple Worldwide Developers Conference in San Francisco. This morning they announced new Macintosh laptops, a significant upgrade to the Mac OS and, what everyone was really waiting for, a software and hardware upgrade to the wildly popular iPhone.
In honor of the iPhone announcement I thought I would take a few minutes to write about the other phone getting all the news coverage right now: the new Palm Pre. Most of the coverage leading up to the Pre's launch this past weekend was about how Ed Colligan (Palm's CEO) lured Jon Rubinstein (the brains behind the iPod and iMac) out of retirement to take on his old company in the phone space. The coverage -- and there was a lot of it -- focused on the battle between the old leader and the new leader in the handheld / smartphone market segment.
Now I should be fair and disclose that I was a huge Palm fan throughout the '90s. I was an early Palm Pilot adopter and ended up purchasing three or four over the course of a decade, either to upgrade or simply to replace an older unit. (I have a bad habit of dropping my phones and PDAs, usually with a bad result.) I invested in Palm apps and used my PDA for everything I could. When the Treo smartphone was introduced I was ecstatic. I owned my Treo 600 for years. With a few hardware exceptions -- that annoying buzzing sound comes to mind -- I was a happy user. That is, until Palm abandoned me wholesale. They introduced Windows Mobile phones and no longer advanced the Palm OS. They were happy to take my money but never willing to advance me along the product lifecycle.
With that said, the one thing that impressed me the most about the iPhone -- no, not the interface or multimedia or handwriting recognition or applications -- was the fact that despite owning a first-generation phone I could upgrade to the 2.0 software for free. Yes, the hardware doesn't support all the bells and whistles, but Apple was going to continue to advance the technology for me. More shocking is that they were going to advance my software capabilities for FREE. I was happy to pay for it, but the upgrades (with dot revisions there have been four or five now) continue to come for free.
Today Apple announced the iPhone 3.0 software. Guess what? I'll download it when it becomes available on June 17th, for free. I'll flash my first-generation iPhone and get additional functionality. Guess what? I'll be happy...
Palm will never again get my business. I can't afford their model. The Pre may be cool, but if you're interested in buying one you should ask for a commitment to future upgrades (paid or free) before you walk out the door! Palm is the number-five phone OS vendor, behind Apple, Nokia, BlackBerry and Android. Do you really want to take the risk that they'll be in business long enough to provide the support you deserve for something as business-critical as your phone?!
Thursday, June 4, 2009
This just in: Intel is buying Wind River Systems
Intel is buying an embedded OS vendor. Hmmm, think they're going to start inching even closer to the system vendors? There's more money to be made in system integration than in hardware?! Where have we heard that before? Oh, there will be more to write on this subject!
Wednesday, June 3, 2009
Wake Up Facebook!
So my topic for the day was going to be commoditization. So many things are increasingly moving to lowest-cost, genericized offerings. I just saw this news article about advertising revenue:
http://www.btobonline.com/apps/pbcs.dll/article?AID=/20090602/FREE/906029995/1078/newsletter011
No surprise, really, considering how many options are now available to advertisers. I was going to focus today's blog on how advertising-based business models are bound to fail and how new sources of revenue based on value add are critical. I'll save that for another day and focus on something related that I feel particularly emotional about today:
WAKE UP FACEBOOK!!!
I can't say that loudly enough! They have become so blinded by an investor valuation that they're missing the real opportunity for revenue, sustainability and, ultimately, long-term relevance. Facebook will go the way of, dare I say, MySpace -- I don't know anyone on MySpace anymore -- or Orkut (remember them?) unless they focus on their value add and build a business model that creates revenue from their service.
I know, I know, many will scoff at this opinion. But regardless of the success they're having today at building an infrastructure for communities, some completely unknown service or technology will come along tomorrow, next month or next year that is cooler, hotter or just easier, and it will totally subvert their success and eliminate any perceived market valuation.
So what is the answer?! The funniest and most aggravating thing about the Facebook service is that there is a straightforward and powerful value proposition with distinct revenue-generating potential that has not been leveraged. The largest and wealthiest market in dire need of Facebook's easy-to-use collaboration and knowledge management solution is business -- the Fortune 1000. The problem is that Facebook is so caught up in a "cool" consumer cycle that they miss the business opportunity right in front of them.
Facebook should be able to use their existing service, relatively easily, as the basis for a corporate SaaS collaboration, knowledge management and auditing service. I can't speak for every corporate intranet, but certainly the ones I've used have been a nightmare aggregation of mixed systems providing limited functionality along with horrible search, auditing and collaboration tools that never quite get optimized. Worse yet, as we've all likely experienced, as soon as we get comfortable with one tool a new one is introduced with an entirely different interface and capability. Facebook is terrific for creating user-controlled groups and integrating IM, email, web services and digital media for sharing, controlling and collaborating. The technical challenges would be minimal (at least initially, as a SaaS offering only), with a mid-market business customer seeing immediate and significant value from an enterprise-type offering.
More importantly for Facebook, they could charge business customers a recurring subscription fee that is predictable and not insignificant. It's not advertising-based. With their advanced feature set and brand recognition Facebook could quickly gain market share and significant customer success.
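To make "predictable and not insignificant" concrete, here is a back-of-the-envelope sketch in Python. Every number in it is a hypothetical assumption, purely for illustration; the point is that per-seat subscription revenue is contracted and steady, while ad revenue swings with traffic and CPM rates.

# Back-of-the-envelope sketch (all figures are hypothetical assumptions,
# not Facebook data): per-seat subscription revenue is contracted and
# predictable, while ad revenue moves with page views and CPM rates.

def annual_subscription_revenue(customers, seats_per_customer, price_per_seat_month):
    """Recurring revenue from an enterprise per-seat subscription."""
    return customers * seats_per_customer * price_per_seat_month * 12

def annual_ad_revenue(monthly_page_views, cpm):
    """Display-ad revenue; CPM is dollars per 1,000 impressions."""
    return (monthly_page_views / 1000.0) * cpm * 12

# Hypothetical: 500 mid-market customers, 1,000 seats each, $3/seat/month.
print(annual_subscription_revenue(500, 1000, 3.0))     # 18,000,000 -- locked in by contract
# Hypothetical: 10B page views/month; CPMs swing between $0.20 and $0.50.
print(annual_ad_revenue(10_000_000_000, 0.20))         # 24,000,000
print(annual_ad_revenue(10_000_000_000, 0.50))         # 60,000,000 -- same traffic, a 2.5x swing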
The fact of the matter is that, with Facebook absent from this space, a number of Facebook-like services have been popping up to serve this B2B need. These include Grou.ps, TamTamy, Jive, and Igloo. Facebook has a competitive advantage today, but it must move quickly to address this need before the market opportunity passes them by...
Tuesday, June 2, 2009
Desktop virtualization
As a follow-up to yesterday's blog on the hypervisor revolution I wanted to spend a few cycles talking about the desktop. For the last 30 years we have become accustomed to personally managing our desktop OS. From business to consumer and back to business desktops, our sense of ownership and entitlement has come at a very high cost: the cost of management, security, patching and corruptions / conflicts, and ultimately the cost of the hardware upgrade (or replacement) necessary to support the next / latest version of the software we need to get our job done.
Now, the idea of running your desktop OS, or at least a subset of applications, from the server is nothing new. The first desktops were dumb terminals running mainframe sessions. The evolution to thin clients was somewhat revolutionary in that you had your own GUI-based desktop image. However, because the network wasn't fast or efficient enough, thin-client solutions were forced to make compromises that in most cases negated the benefits of moving off a traditional PC in the first place. You still had an OS image on the desktop to manage, patch and secure. In addition, customers often had a set of legacy peripherals they were forced to scrap because the thin client didn't support them. Many solutions also had architectural issues -- single points of failure -- or unique management interfaces that IT had to learn and certify. Often it was more challenging than simply sticking with the tried and true. Now, I'm generalizing, and for simplicity I'm not even dealing with application virtualization as an option. Many customers also moved to a bare-bones Windows remote-session offering from Microsoft called Terminal Services. This functions fine unless the user needs some basic things like support for sound. ;)
Which brings us back to the hypervisor. The wonderful thing about virtualization is that you can run any OS (x86-based, that is) on a hypervisor on a server and deliver it via the network. The benefit of this model -- beyond centralized imaging, managing, patching, service and support -- is that you can optimize the network to the point that the technological footprint of the client is minimized yet nearly full desktop capabilities are delivered to the end user. Hence the introduction, over the last two years, of the Zero Client. (I take some credit for that concept!) Yes, a Zero Client may have some small amount of firmware or bootware, but for all intents and purposes we're talking about a client-side box that provides ports for network and peripheral access. (A place to plug in your monitor, mouse and keyboard.) There are a number of Zero Clients on the market today, led in large part by Teradici's OEM partners (such as ClearCube), Pano Logic and nComputing. Now, for the initiated, I don't want to get into a discussion of whether Teradici or nComputing are indeed desktop virtualization plays at all, considering their unique architectures, or whether they (today) utilize a hypervisor back end -- in those two cases they don't.
The background was important for my point: while this is revolutionary it ultimately doesn’t matter!
None of this matters for two reasons: commoditization and cheap alternatives.
- The three most significant components of desktop virtualization and its alternatives are connection brokers (software that authenticates and connects remote users), hypervisors (disaggregators of the OS) and protocols (optimized network connections). Connection brokers were commoditized two years ago; if you want a connection broker, you call any vendor in the space and they'll give it to you for free. There are no companies specializing in connection brokers left -- VMware, Microsoft, Citrix and all the smaller vendors provide the connection broker at no cost. (A minimal sketch of what a connection broker actually does appears after this list.) The hypervisor is being commoditized as I write this blog. VMware tried to sell ESX for $999 and Microsoft countered with their hypervisor, Hyper-V, for $28. Now you can get either for free. Xen, the open-source hypervisor, has always been available for free. Sun Microsystems' Xen derivative, xVM, is also available for free. The highest-potential hypervisor, KVM, will begin shipping as part of Red Hat any quarter now as well. Their business, like VMware's and Microsoft's, is to build a business on management tools and not on the underlying technology. Lastly there is the protocol. This is where most vendors in the space are focused on creating added value and differentiation. However, with Microsoft working diligently to significantly upgrade RDP (provided for free) and VMware partnering with Teradici to distribute PCoIP later this year, I predict that soon the protocol will also be free. This leaves tertiary players such as Pano Logic, Sun Microsystems, Wyse and even Citrix, with their robust ICA, stuck between a rock and a hard place, trying to persuade customers to pay for little additional value add.
- Devices ultimately are unimportant because the value of any server-centric desktop solution should be to deliver an optimized experience to ANY device. Yes, there are benefits to a Zero Client, but the installed base of PCs, netbooks and, most importantly, cell phones is vast, and each has a very unique value that will never be eliminated by thin clients or zero clients. Zero clients in particular have another challenge: since they hold no state, they need a secure network connection, which eliminates their ability to connect from outside the intranet (at least easily and cheaply). This is yet another major obstacle.
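To make the first point above concrete, here is a minimal, illustrative sketch of what a connection broker does: authenticate the remote user, look up the desktop assigned to them, and hand back the connection details (protocol, host, port). It's a toy Python example with made-up names and data, not any vendor's actual implementation -- real brokers add pooling, load balancing, tunneling and directory integration.

# Toy connection-broker sketch (hypothetical users, hosts and assignments).
from dataclasses import dataclass

@dataclass
class Desktop:
    host: str       # server or VM hosting the user's desktop image
    port: int       # port the display protocol listens on
    protocol: str   # e.g. "RDP", "ICA" or "PCoIP"

# Hypothetical credential store and desktop assignments.
USERS = {"alice": "s3cret", "bob": "hunter2"}
ASSIGNMENTS = {
    "alice": Desktop(host="vdi-01.example.com", port=3389, protocol="RDP"),
    "bob":   Desktop(host="vdi-02.example.com", port=4172, protocol="PCoIP"),
}

def broker_connect(username, password):
    """Authenticate the user and return the desktop they should connect to."""
    if USERS.get(username) != password:
        raise PermissionError("authentication failed")
    desktop = ASSIGNMENTS.get(username)
    if desktop is None:
        raise LookupError("no desktop assigned to " + username)
    return desktop

d = broker_connect("alice", "s3cret")
print("Connect via {} to {}:{}".format(d.protocol, d.host, d.port))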
Another view of virtualization
I've spent a large part of the last five years looking at server and desktop virtualization. It was only a matter of time before I spent cycles in this blog focused on the role, impact and future of virtualization in datacenter IT and, ultimately, on the desktop. I will likely keep coming back to this subject or one of its derivations (cloud computing, SaaS, etc.) over the coming months.
I thought I'd start with my view of the hypervisor's impact on IT. The hypervisor is a disaggregation technology: it disaggregates hardware from software, x86 platforms from operating systems. For those Macintosh or Linux fans out there, the hypervisor is what easily brings Windows applications to your beloved platform. For IT, the hypervisor allows applications to continue to run as you migrate to new platforms or seamlessly add additional workloads to existing platforms.
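One concrete way to see that disaggregation: to a hypervisor, a guest OS plus its applications is just a definition plus a disk image, so the same guest can be defined and started on whichever physical host you point it at. Below is a minimal sketch using the libvirt Python bindings against a KVM/QEMU host; the connection URI, domain name and disk path are hypothetical, and this is an illustration of the idea rather than anyone's production tooling.

import libvirt  # assumes the libvirt Python bindings and a running libvirtd

# Hypothetical guest definition: the OS and applications live in the disk
# image; nothing here is tied to a specific physical machine.
GUEST_XML = """
<domain type='kvm'>
  <name>legacy-app</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/legacy-app.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

def start_guest_on(host_uri):
    """Define and boot the same guest on whichever host the URI points at."""
    conn = libvirt.open(host_uri)        # e.g. 'qemu:///system' or 'qemu+ssh://newbox/system'
    try:
        dom = conn.defineXML(GUEST_XML)  # register the guest with this host
        dom.create()                     # boot it; the OS inside is unchanged
        print(dom.name() + " running via " + host_uri)
    finally:
        conn.close()

# Migrating to new hardware amounts to pointing the same definition at a new host.
start_guest_on("qemu:///system")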
My contention has long been that for the obvious uses -- consolidation, optimization and business continuity solutions -- the hypervisor has real value, but only evolutionary value. Best practices don't fundamentally change. Architectures evolve but don't change in revolutionary ways. Management approaches do sometimes change, even radically, but the tools of the trade don't really do more than evolve.
During my tenure at VMware I was struck by the fact that there are two distinct areas that are revolutionarily impacted by the hypervisor -- two areas that, at the time, VMware was only minimally invested in. These two areas -- virtual appliances and desktop virtualization -- have, under new leadership, seen greater investment at VMware, but they are still, I believe, entirely underinvested in by IT.
The first area, virtual appliances, seems to have been the least impactful over the last few years. There seem to be only a few companies trying to create a business around this concept. Virtual appliances -- the use of a hypervisor by a software developer to disaggregate OS decisions for their customers -- could potentially have a profound impact on software development models and dramatically change the call-to-arms between the warring .NET and J2EE camps. The ultimate impact of a virtual appliance was captured, albeit only briefly, by BEA (since acquired by Oracle). Since BEA had a fast derivation of an OS (their implementation of the JRockit JVM), they wedded ESX to an optimized JRockit and their app server to create a "bare metal" implementation of their software stack for Intel-based systems. No need for Windows, Linux, Solaris or any other OS: simply install the stack with a single click and you have an optimized software solution installed and ready to run. BEA could pick and choose which OS-level components it wanted to optimize and deliver as part of the stack. While the utility is obvious for the customer, the ultimate savings -- no longer having to choose single or multiple development paths for different OS platforms -- were never realized by BEA, as they were acquired shortly thereafter.
There are some companies out there trying to build a software development business around this technology; rPath is one that comes to mind. It's also clear that VMware has increasingly invested in software distribution for their partners using the hypervisor in this manner: they have built a software distribution community through their corporate site focused on this. However, the long-term impact on the "traditional" developer model hasn't yet materialized. I have faith that at some point it will. Perhaps it needs a bit more consumer exposure?
The other area, desktop virtualization, has been an area of heavy VMware investment over the last couple of years. It's also an area I know quite a bit more about, having spent a year in thin clients with Sun Microsystems and the last couple of years with a desktop virtualization startup (with which I am no longer associated). I have also done quite a bit of writing about this space (and will obviously continue to) -- see my very first blog entry. Stay tuned for tomorrow's update for more on the evolution of the desktop and the impact of the hypervisor on OS delivery.
Monday, June 1, 2009
The End of an Era - Part 2
So I had just finished my entry on Sun Microsystems when I realized that there was another very sad passing that hasn't gotten the kind of press that Sun - Oracle has: SGI. Good old Silicon Graphics! Yes, I spent time at SGI as well... They too had some religion, albeit not at all with the same passion for it as Sun. SGI was willing to change and grow their solution to fit customer demand. Unfortunately their leadership (post-McCracken) was interim in every way and didn't seem to care too much about the impact their short-term thinking would have on the long-term business. Just for clarification, I'm not speaking about the SGI leadership of 2000. I'm specifically speaking about the year of "Rocket Rick." This would-be savior came from HP's printer division, where he was hailed as a visionary leader who knew how to manage commoditized technologies and could take SGI's graphics leadership to the next stage.
What he did instead was give away the farm.
SGI had already committed to a Windows path and was already working on an NT-based workstation. One small mistake there... the wonderful, industry-leading, 35M-transistor graphics engine designed specifically for the Visual Workstation was hardwired to the desktop's motherboard. This meant that for six months the customer had top-of-the-line performance. After that they were stuck; there was no easy way to upgrade the components. Oops. A stumble. Not fatal, but certainly embarrassing. It was hard for many engineers to foresee the commoditization of graphics, but it was happening in real time.
No, that wasn't fatal. What was significantly contributory, however, was SGI's stewardship of the graphics API known as OpenGL. This was the cornerstone of SGI's IP leadership. Yes, it was open to everyone, but it was so advanced that SGI had a hand in virtually every big graphics and big data solution on the planet. (Big data was critical too.) Rick Belluzzo, eager to please his future employer Microsoft, engaged in Project Fahrenheit. This was supposed to be a graphics interoperability project between OpenGL and Direct3D. What it ended up becoming was a way for Microsoft to successfully stall OpenGL development for a year or two while Microsoft enriched Direct3D to make up some of the gap in technology.
By itself this wasn't enough to kill SGI, but couple that with the decisions to (1) spin off the technology that would ultimately become business intelligence visualization pioneer E.piphany and (2) adopt the Itanium processor as the successor to MIPS, and you have the recipe for disaster.
For me the funniest and most tragic moment in my career at SGI occurred on the day Rick Belluzzo was introduced to the employees. First there was his accidental reference to "us employees at HP..." (I'll give him that mistake), and then his comment that if we didn't do our jobs and execute we'd all be walking around with Sun Microsystems badges by year end. That was something! Most of us were asking what was so wrong with that?! In retrospect, Sun gets sold for $7B and SGI gets sold for $25M.
The End of an Era - Part 1
It's amazing how many people look at my CV and immediately ask the very same question: "So what about Sun Microsystems?" At least we now have the next step defined: Oracle. The only remaining question is what's next. Firstly, as a former Sun employee, I too drank the Kool-Aid. Funny metaphor, in that it had the very same net result as the Koresh cult... death. Ultimately Sun's undoing is due in no small part to that Kool-Aid. Technology is not a religion. It should never be treated as such.
As the former Director of Marketing for the Sun-Microsoft Collaboration, and later as Director of Partner Operating System Marketing (my team and I were responsible for marketing all non-Solaris OS implementations on Sun's hardware, including Windows, Red Hat, SUSE, Ubuntu and VMware), I can tell you from first-hand experience that Sun's problems largely parallel how slowly (if at all) the company decided that selling the solutions customers ask for is what it should be doing. It took over two years for the company to formally OEM VMware's hypervisor and solutions stack. The Sun channel partners who wanted to sell VMware virtualization on Sun's Opteron-based server hardware usually sold an HP SKU. Yes, sell Sun hardware and HP gets a cut. That's no way to run a business! Some may even find it shocking that there was actually a Partner Operating System marketing organization. That, by itself, was progress for Sun. You should know that I reported to the VP of Solaris Marketing. Not an organizational structure designed for success.
So, what happens now? Oracle can quickly become one of the handful of end-to-end IT solutions providers -- along with IBM, HP, Microsoft/Dell, and perhaps Cisco at some point (I'll blog on that later). Or Oracle can decide to simply continue to focus on the software footprint and sell, close or spin off the server and storage hardware parts. If I were a betting man, I'd say that Oracle will look to use hardware to create appliances from much of their software stack and attempt to optimize the remaining hardware for the database and applications stack. (Which, by the way, they will fail to accomplish in the same way everyone else in the commoditized platform business has failed to accomplish it. That, after all, is what commoditized means: everyone can build -- or has access to -- the same thing.)
Sun Microsystems will ultimately go the same way as DEC... remembered fondly in alumni groups until those too die off.