Cloud Computing has ushered in new ways of developing and delivering products. The emergence of Agile Development Practices has sped Cloud Computing adoption. Interestingly, executives struggle with leading organizations that must master both. Here, I explore lessons learned from helping companies deal with these two dynamics.
Thursday, July 30, 2009
Conspiracy at Cloud Camp Boston
I attended CloudCamp Boston last night. If you did not attend, you missed out on an excellent unconference. Many thanks to the sponsors for picking up the food and bar tab and providing the meeting space. Special thanks to Matthew Rudnick and Wayne Pauley, who lead Silver Lining, the Boston Cloud Computing User Group, for jump-starting the event.
The 350 registered attendees heard Judith Hurwitz and John Treadway give a good overview of Cloud Computing with some of the latest definitions and terms. Most of the action came in the sessions where attendees could ask their questions in a group setting, hear various opinions, and have a discussion to seek understanding. The hallway discussions were happening everywhere. There was an urgency to the give and take of information so the attendees could get to the next person on their list.
Let's face it, the concept of Cloud Computing is vague, unfamiliar, emerging, and complex. I applaud those who are overcoming the inclination to wait until the dust settles before they learn about it. They are sorting through the hype, 'cloudwashing' (a play on whitewashing), pre-announcements, and unproven pundit claims to uncover what they need to learn and, most importantly, unlearn. The common answer to their questions was, 'it depends'. Still they persisted in refining their questions and seeking why it depends.
Apparently, there is a controversy surrounding the concept of 'private cloud'. Some maintain that a private cloud is nothing more than a move by existing IT types to keep their jobs and hardware vendors to keep up their hardware sales to enterprises. Has Oliver Stone been seen lurking around Armonk lately?
Putting conspiracy theories aside for a moment, my brief description of a private cloud is cloud computing done internally. Our NIST friends would agree in principle with this definition. For example, if an organization could package up all of AWS's tools, software, hardware, and operational knowledge, and actually operate its own resources with the same capability and efficiency as AWS does, that would be an example of a private cloud. A private cloud duplicates the same level of automation, process control, programmatic control, scale, multi-tenancy, security, isolation, and cost-efficiency as a public cloud. There may be some internal data centers today operating as efficiently as AWS's public cloud that could claim to be private clouds. However, a person who points to a hodgepodge of machines maintained by an army of administrators and claims he has a private cloud would have difficulty proving his case.
If hardware vendors and IT types are promoting private clouds to save themselves, they may have grabbed an anchor instead of a life-preserver.
Labels: Cloud Computing, CloudCamp, CloudCamp Boston
Tuesday, July 28, 2009
Safe Bet
Microsoft's Azure pricing was announced earlier this month. There have been a few blog posts publishing the numbers and comparing prices. The bottom line is that Microsoft pretty much priced their offerings at parity with Amazon Web Services[1]. The question that kept coming to mind was 'Why parity?'.
Microsoft has market dominance, a relatively captive developer audience, large data center experience, and cash. Azure is designed to remotely run customer code under their control on Microsoft's software stacks. The Azure developer experience is similar in style to the desktop development experience. Azure should be efficient since they are leveraging Microsoft's massive data centers and operational expertise. They have the capital for a prolonged battle.
Meanwhile, AWS prices have been relatively fixed for some time. AWS storage and small-compute instances have remained the same for years. While Amazon has offered new services like reserved instances at lower prices, and tiered outgoing bandwidth prices, the US pricing has remained unchanged. This is an amazing feat given how technology prices fall over time. Sounds like a pricing target to me.
Why not get banner headlines by undercutting AWS? Governments would not blink if Microsoft took on the world's largest online retailer on price. Would they? Azure is late and behind. Wouldn't lower prices demonstrate that Microsoft is serious about Azure and Cloud Computing? Azure has the benefit of using modern hardware in a market with two-year-old pricing. Microsoft has their own large data centers in low-cost locations. Couldn't Azure use these to their advantage? If anyone could take on AWS on price, Azure could do it.
Why wasn't Azure's pricing set lower? I don't know the answer. I suspect that, years ago, AWS set aggressive, forward-looking prices based on future efficiencies that they felt they would achieve. They have pretty much executed on plan. If so, there isn't much pricing room for a newcomer to undercut them. Given the large capital investments, automation complexities, low price-per-unit, high unit volumes, and thin margins, any small pricing mistake will compound and drastically affect the bottom line. Azure went with the safe bet, pricing parity.
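To see why thin margins leave so little pricing room, here is a toy sketch of the arithmetic. Every figure is invented for illustration; the point is simply that a cent-level under-price, multiplied across a huge volume of instance-hours, swings the bottom line dramatically.

```python
# Toy illustration of why thin margins leave little pricing room.
# All numbers are invented placeholders, not actual AWS or Azure figures.
hours_sold_per_month = 50_000_000   # instance-hours billed per month (assumed)
cost_per_hour        = 0.095        # fully loaded cost, $/hour (assumed)
list_price           = 0.10         # $/hour -> roughly a 5% margin

for discount in (0.00, 0.01, 0.02):  # undercut the incumbent by 0, 1, or 2 cents
    price  = list_price - discount
    profit = (price - cost_per_hour) * hours_sold_per_month
    print(f"price ${price:.2f}/hr -> monthly profit ${profit:,.0f}")
```

Under these made-up numbers, a one-cent undercut turns a $250,000 monthly profit into a $250,000 loss.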
AWS's pricing may be one of the greatest barriers to entry for the next generation of computing. If so, talk about beginner's luck.
[1] There may be a few exceptions with Microsoft's Web Role and SQL Azure. The price difference between Web Role and AWS CloudWatch may be based on different technical approaches and the resources used, with Web Role potentially being the higher of the two. SQL Azure's monthly price bundles processing and storage, whereas SimpleDB prices its storage and compute components based upon actual usage per month.
Friday, July 10, 2009
Vapor Versus Vendor
I'm interested in cloud versus owned IT cost comparisons. Understanding how organizations break out costs and set up the comparisons is insightful to their views and thinking. Some comparisons don't include the multitude of overheads, for example, management, facilities, procurement, engineering, and taxes because these costs are hidden or difficult to estimate.
A friend sent me a pointer to the up.time blog post. The author is comparing his in-house test environment against running a similar test infrastructure on AWS. AWS is commonly used for building test environments. The post does a good job of breaking out in-house costs and tracking AWS expenses. The author factors some overheads into the analysis. The result? AWS costs $491,253.01 per year ($40,937.75 per month) more than his in-house environment. Wow!
There must be some mistake. The author must have left out something.
I can argue some minor points. For example, an average month has 30.5 days instead of 31, trimming about $1,000.00 per month off the $64,951.20 in instance charges. Another is that the analysis should include the additional overheads mentioned above. These minor points may add up to $5,000.00 per month, a far cry from explaining the $40,937.75 per month delta.
Looking a bit deeper, there is a big cost difference between the 150 Linux and 100 Windows instances. Let's break out a baseline of Linux costs (to cover AWS machine, power, and other costs) from the additional cost for Windows. The baseline price for 302 Linux small and large instances is $22,915.20 per month. The Windows premium is $0.025 per CPU hour, which works out to $2,827.00 per month for 152 Windows instances. The cost for Windows and SQL Server running on 152 instances is $42,036 per month. Hence, the SQL Server premium is approximately $39,171.60 per month. The author pays $4,166.00 per month for his in-house database licenses. The premium paid for SQL Server on AWS is approximately $35,005.06 per month.
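Here is a minimal sketch of that arithmetic. The instance counts, rates, and monthly totals are the figures quoted above; the post's exact billed hours aren't given, so a 31-day month is assumed and the results land close to, but not exactly on, the cited numbers.

```python
# Reconstructing the Linux / Windows / SQL Server breakdown described above.
# Figures are taken from the post; the 31-day month is an assumption.
HOURS_PER_MONTH = 31 * 24

linux_baseline   = 22_915.20                  # 302 Linux small/large instances, $/month
windows_rate     = 0.025                      # Windows premium, $ per CPU-hour
windows_premium  = windows_rate * HOURS_PER_MONTH * 152   # 152 Windows instances
windows_sql_cost = 42_036.00                  # Windows + SQL Server on 152 instances

sql_premium      = windows_sql_cost - windows_premium      # ~$39,200 per month
in_house_db      = 4_166.00                   # author's in-house database licenses
sql_extra_on_aws = sql_premium - in_house_db               # ~$35,000 per month

print(f"Linux baseline:            ${linux_baseline:,.2f}/month")
print(f"Windows premium:           ${windows_premium:,.2f}/month")
print(f"SQL Server premium:        ${sql_premium:,.2f}/month")
print(f"Extra vs in-house license: ${sql_extra_on_aws:,.2f}/month")
```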
Most of the $40,937.75 per month cost disadvantage for the cloud can be explained by the AWS pricing for Microsoft Windows at $2,827.00 per month and SQL Server at $35,005.06 per month. If the author would allow me, I could haggle the overheads and other minor issues to close the remainder of the gap. But that's not the real issue here.
The pricing for Windows and SQL Server on AWS is not competitive with Microsoft's purchase pricing. Paying almost 10x more is not reasonable. The author points out that ISVs normally have agreements with other ISVs to use their software in test environments for low or no fees. If the test environment needs Windows or SQL Server, you'll have to pay a lot for it at AWS.
One last point: the author wondered if anyone does resource flexing in the cloud. As I pointed out at the beginning of my post, AWS is commonly used for testing because people can scale up their resource usage prior to release deadlines and when testing at scale. They reduce their resource usage when the need passes. Hence, resource utilization and speed to acquire incremental resources are additional factors to consider.
Wednesday, July 8, 2009
Will Google's Chrome OS be successful?
Google announced its Chrome OS project last night. Google is developing a secure, simple, and fast PC OS focused on web applications. This is a forward-looking move given cloud computing and Software-As-A-Service's projected growth. Will Chrome OS succeed? I see a few trouble spots in Google's blog post that they will need to overcome internally to have a chance at success.
Trouble spot #1: "Because we're already talking to partners about the project, and we'll soon be working with the open source community, we wanted to share our vision now so everyone understands what we are trying to achieve. ... we are going back to the basics and completely redesigning the underlying security architecture of the OS so that users don't have to deal with viruses, malware and security updates. It should just work. ... Google Chrome running within a new windowing system on top of a Linux kernel."
Google claims to know how to do a secure Linux OS (Linus must be thrilled), simple distribution (yet another Linux distribution) and fast windowing system (yep, Linux is weak here), and they're sharing their vision (via a blog post) with the open source communities. Hubris usually doesn't go far with open source communities.
Trouble spot #2: "Google Chrome OS is a new project, separate from Android. Android was designed from the beginning to work across a variety of devices from phones to set-top boxes to netbooks. Google Chrome OS is being created for people who spend most of their time on the web, and is being designed to power computers ranging from small netbooks to full-size desktop systems."
Google has two emerging and competing OS projects. Each is up against strong, entrenched competitors. 'Netbooks' are an emerging market, with a few OEMs (Freescale, Acer) planning to use Android as their OS. While Google has the extra cash to fund competing projects, OEMs and retailers don't have the resources to support both. They need to invest to make one succeed.
Trouble spot #3: "We have a lot of work to do, and we're definitely going to need a lot of help"
Given today's economy, everyone is willing to spend their abundant resources and time to help Google become more powerful. Right? Yeah, I thought so too.
Labels: chrome os, Cloud Computing, google, netbooks, open source
Tuesday, July 7, 2009
Does open source have a future in the clouds?
I have just read Stephen O'Grady's well-written post on open source and cloud computing. Everyone should read his post. He makes excellent arguments outlining a pessimistic future of open source in the cloud.
The cloud market is new and has a long way to go before a few dominant players control a mature, stable market. Many customers want the cloud market to mature quickly: obvious market leaders, well-understood standards, prices falling year over year, and a relatively risk-free technology path forward. Someone at the Boston All-Things-Cloud meet-up mentioned that innovation will happen quickly in the cloud and that the market will rapidly mature. Don't get me wrong. I want the same things. However, high-tech markets don't develop as quickly as desired, nor as projected.
While the speed of technology development has been increasing, the pace at which humans can comprehend it, assess the risks, and plan its usage has been a relatively slow constant. Customers will take a while to comprehend this new market's potential. Even if customers understand everything about it, market maturity is a long way down the road.
What does this point have to do with predicting open source's sad fate? Cloud computing will take a lot of time to develop and refine itself, giving open source projects time to adapt. Open source projects are built to adapt by their very nature. Advances in cloud computing will benefit how open source projects are done in ways that we cannot comprehend today. So, maybe the future isn't so bleak.
Let's look at the nature of open source projects. They are a mixture of leading edge (see Eric Raymond's The Cathedral and the Bazaar first lesson) and re-engineering (Eric's second lesson) efforts. Open source developers commonly start with a clear need and usage pattern. They publish the code under a license to encourage other developers to extend the project for additional uses and contribute the extension back to the original project. Successful projects change and adapt because different developers are 'scratching various itches'. All it takes is one motivated project member to adapt (or initiate) a project for the cloud.
A common open source project problem has been finding secure, affordable, and usable Internet-connected equipment. In the past, well-meaning hardware providers would contribute equipment to open source projects, only to find that the projects did not have the financial means to operate and maintain the equipment at a hosting provider. Cloud computing provides by-the-hour resources for testing and development that individual project members can easily afford. Previously, open source projects that required large amounts of resources for short periods of time were impractical. Now, they are affordable.
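As rough arithmetic on what "affordable" means here, consider an invented burst-testing scenario priced at the 2009 US list rate of $0.10 per hour for an EC2 small Linux instance:

```python
# Rough arithmetic on "large amounts of resources for short periods".
# The scenario is invented; $0.10/hour was the 2009 US list price for an
# EC2 small Linux instance.
instances     = 200     # burst fleet for one evening of testing at scale
hours         = 6
rate_per_hour = 0.10

print(f"Burst test cost: ${instances * hours * rate_per_hour:,.2f}")   # $120.00
```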
AWS's rise to early dominance in the market was due to its appeal and familiarity to Linux developers. Open source projects can make their binaries available via Amazon Machine Images (AMIs). Other project members can instantiate the AMIs as their own AWS instances and quickly have the project code running. This has helped boost both the AWS developer ecosystem and open source projects. Here are two examples: the cloudmaster project and AMI, and the mapnik + tilecash projects and their AMI. While I don't have any specific examples, I would not be surprised if a few open source projects have put out a 'donation' AMI to create a cash stream to offset their development costs.
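As a concrete illustration of "instantiate the project's AMI yourself", here is a minimal sketch written against the present-day boto3 SDK (which post-dates this post). The AMI ID, key pair name, and instance type are placeholders, not a real project image.

```python
# Hypothetical sketch: launch an instance from an open source project's
# published AMI. All identifiers below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # the project's published AMI (placeholder)
    InstanceType="t2.micro",           # placeholder instance type
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",              # your own SSH key pair (placeholder)
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched project instance: {instance_id}")
```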
Stephen correctly pointed out that there is currently no turn-key IAAS open source project. I would expect that it will take time for a collective open source effort to piece the right open source projects together to address this need. There are reasons to believe that it will be done. For example, Amazon Web Services (AWS) used many open source projects to build their offerings. They have contributed back to open source projects. I was part of an effort to build an IAAS offering. We were surprised to find how much of the networking, security and virtualization infrastructure needed for an IAAS offering already existed in open source projects.
Open source innovators are not idle. New open source projects are providing needed Internet-scale technology that today is proprietary. Take a look at Chuck Wegrzyn's Twisted Storage Project (to be renamed FreeSTORE) as an example of an open source project contributing Internet-scale storage technology. I'm guessing that others are out there too.
To be fair, one cannot build an equivalent to AWS from open source projects today. Thus far, AWS has yet to contribute their wholly owned and developed source code to the open source community. If AWS determines that contributing the code to the open source community is a move that would be in their best interest, it's a play that would assure open source's future in the clouds. It may even accelerate standards and cloud maturity. Hmmmm, maybe the guy at All-things-cloud was right.
Wednesday, July 1, 2009
What does it take to profitably compete against Amazon Web Services?
I've been concerned about this question for the past decade. I did not realize it until after Amazon's Web Services became popular. How is that possible? Allow me to explain.
I joined Sun Microsystems over a decade ago, intrigued by Jim Waldo, Bill Joy, and others' vision of how computing would change as low-cost CPUs, storage, and networks emerged, and as distributed computing became accessible to large numbers of developers. A key question was 'how would businesses make money?' The particular technology and business that we imagined did not pan out as we had hoped. As it turns out, we were too early.
Fast forward to a few years ago as many explored how utility and grid computing capabilities could be monetized. Sun Grid and the 'dollar per CPU hour' was advanced as a possible Infrastructure-As-A-Service (IAAS) model. The Sun CTO organization began a research effort, dubbed Project Caroline, to investigate technology for the not-yet-named Platform-As-A-Service (PAAS) space. As part of the Project Caroline effort, we built a model data center to evaluate the technology's potential. Still, the question, 'how to make money?', loomed.
Shortly afterward, Amazon Web Services began rolling out their offerings. They instantly appealed to developers. They captured the early IAAS market with fast-paced growth to the dominant position. Their offerings were straightforward and aggressively priced.
It clicked. I needed to understand how an IAAS could compete against Amazon Web Services. There are good strategies for competing against an early leader in an emerging market. Going head-to-head with the leader in a low-margin, capital-intensive market is an unlikely strategy. However, knowing what it would take to directly compete with Amazon Web Services would be informative and instructive.
As I talked with people, the complexity became overwhelming. I decided to construct a model to allow evaluation of many potential scenarios. For example, what if I wanted to get to market quickly by using a 24x7 operations center staffed by people who handle the configuration and management of the customer resources? What if I purchased software to automate their tasks? What if I built the software? Another example: what if I used 'pizza box' systems for customer nodes? What about mid-range systems? What about blade systems? On the operations side, what if power rates doubled? What if I doubled capacity? How much of a factor is utilization? Rich Zippel joined the modeling effort.
As we refined the model, a few findings stood out. For example, utilization of the customer resources, equipment selection, software license costs paid by the IAAS provider, and the level of automation of customer resource configuration and management are major factors in determining profitability. Other factors, such as power costs and automation of infrastructure systems' management, have less of an impact than we had expected.
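For a flavor of how such a model behaves, here is a highly simplified sketch. Every figure is an invented placeholder and the real model tracks many more inputs, but even this version shows how utilization and operational automation swing the margin.

```python
# A highly simplified sketch of an IAAS profitability model.
# All figures are invented placeholders for illustration only.

def monthly_margin(nodes, slots_per_node, price_per_slot_hour, utilization,
                   node_capex, amortization_months, power_per_node,
                   license_per_node, nodes_per_admin, admin_cost_month):
    hours    = 730                                        # average hours per month
    revenue  = nodes * slots_per_node * hours * utilization * price_per_slot_hour
    hardware = nodes * node_capex / amortization_months   # straight-line amortization
    power    = nodes * power_per_node
    licenses = nodes * license_per_node
    labor    = (nodes / nodes_per_admin) * admin_cost_month
    return revenue - (hardware + power + licenses + labor)

# Compare manual operations (40 nodes per admin) against heavy automation
# (400 nodes per admin) at two levels of customer-resource utilization.
for nodes_per_admin in (40, 400):
    for utilization in (0.4, 0.7):
        margin = monthly_margin(nodes=5000, slots_per_node=8,
                                price_per_slot_hour=0.10, utilization=utilization,
                                node_capex=2500, amortization_months=36,
                                power_per_node=30, license_per_node=10,
                                nodes_per_admin=nodes_per_admin,
                                admin_cost_month=10_000)
        print(f"{nodes_per_admin:>3} nodes/admin, {utilization:.0%} utilization: "
              f"${margin:,.0f}/month")
```

Under these made-up inputs, the manual-operations scenario loses money at 40% utilization and barely breaks even at 70%, while the automated scenario is comfortably profitable at both levels.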
I presented some of the scenarios and findings last night at the Boston Cloud Services meet-up. The audience had great questions and suggestions. For example, is there any way to increase utilization beyond 100% (overbooking as airlines do), knowing full well that you'll deny someone's request when they access their resource? This could be modeled (a toy sketch follows below); however, the answer would have to account for customer satisfaction and SLA requirements. Would more powerful systems be more cost effective than the low-end HP systems modeled? The model allows various system types and their infrastructure impacts to be compared. I modeled the lower-end systems because they, in general, ended up being more cost effective than other types of systems. However, more systems should be modeled.
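Here is a toy Monte Carlo sketch of the overbooking question. The capacity, overbooking ratio, and activity probability are all invented; the point is to show how denial risk and effective utilization trade off.

```python
# Toy Monte Carlo sketch of overbooking: sell more reservations than physical
# capacity and estimate how often a customer must be denied.
import random

CAPACITY     = 1000      # physical instance slots (assumed)
RESERVATIONS = 1200      # reservations sold, i.e. 20% overbooked (assumed)
P_ACTIVE     = 0.80      # chance a reserved customer is using a slot right now
TRIALS       = 10_000

denied_trials = 0
utilization   = 0.0
for _ in range(TRIALS):
    active = sum(random.random() < P_ACTIVE for _ in range(RESERVATIONS))
    denied_trials += active > CAPACITY
    utilization   += min(active, CAPACITY) / CAPACITY

print(f"Trials with at least one denied request: {denied_trials / TRIALS:.1%}")
print(f"Average utilization:                     {utilization / TRIALS:.1%}")
```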
If you have other questions or suggestions, feel free to let me know.