Saturday, October 15, 2011

Understanding cloud provisioning and deployment components


In this post I attempt to deconstruct what is hyped as "cloud deployment", which is essentially provisioning automation of a virtual image in a virtualized environment. If you want to call that cloud, so be it... but I would be cautious about classifying simple provisioning as 'cloud'.


I. Cloud Provisioning:

I use IWD as an example. IBM Workload Deployer is a WebSphere-centric provisioning automation tool with an appliance form factor. Like any appliance, IWD is also a computer and possesses the usual constraints: finite capacity, network dependency, and limited processing headroom. It is important to understand this, as various parallel actions can stress the performance of the appliance itself. For instance, several parallel large deployments will increase network activity, and in the absence of a robust network this can cause the appliance to slow down.
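
If you drive deployments through scripts, one simple mitigation is to cap how many run in parallel. Here is a minimal sketch of that idea in Python; the deploy_pattern function, the pattern names, and the limit of three are all hypothetical placeholders, since a real script would go through the appliance's own interfaces:

    import concurrent.futures
    import threading

    # Cap concurrent deployments so we do not saturate the appliance's
    # network or processing capacity (the limit of 3 is an assumption;
    # tune it for your environment).
    MAX_PARALLEL_DEPLOYMENTS = 3
    throttle = threading.Semaphore(MAX_PARALLEL_DEPLOYMENTS)

    def deploy_pattern(pattern_name):
        """Hypothetical placeholder for one pattern deployment call."""
        with throttle:
            print(f"deploying {pattern_name} ...")
            # a real script would drive the appliance's REST or CLI
            # interface here; the call is intentionally left abstract

    patterns = ["web-app-A", "web-app-B", "web-app-C", "batch-D", "batch-E"]
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(patterns)) as pool:
        list(pool.map(deploy_pattern, patterns))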

a. Logs: The log files are stored on the appliance. They can be viewed directly from the appliance using the user interface, or downloaded to your local file system for review. You can also collect log data using the command-line interface.

b. Audit Logs: The audit data can be viewed directly from the appliance using the user interface, or downloaded to your local file system for review. These logs provide audit-type information, such as user-specific activity.

c. Connectivity to Core Infrastructure: Ensure that IWD is able to connect to core infrastructure such as DNS, NTP, LDAP, and the deployment target (a quick way to sanity-check this is sketched after this list).

d. Supported levels of SW and virtual environments: Always refer to the currently supported levels and features of the various virtualized 'deployment' environments.
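
Before a deployment fails mysteriously, it pays to script a quick verification of those core dependencies. Below is a minimal sketch; the hostnames are placeholders for your own infrastructure, and note that a TCP probe is only indicative for UDP services such as NTP:

    import socket

    # Hostnames are assumptions; substitute your own infrastructure.
    CORE_SERVICES = {
        "DNS":               ("dns.example.com", 53),
        "NTP":               ("ntp.example.com", 123),   # NTP is UDP; a TCP probe is only indicative
        "LDAP":              ("ldap.example.com", 389),
        "Deployment target": ("hypervisor.example.com", 443),
    }

    def can_reach(host, port, timeout=3):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for name, (host, port) in CORE_SERVICES.items():
        status = "OK" if can_reach(host, port) else "UNREACHABLE"
        print(f"{name:18} {host}:{port} -> {status}")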

II. Network:

Every enterprise has a unique network infrastructure for accessing servers and allowing applications to communicate between components. Various layers support the management of network addressing, deliver critical services, and ensure security. The infrastructure includes specific addressing (subnets), address services like DHCP/DNS, identity and directory services like LDAP, and firewalls and routing rules, all reflecting the specific requirements and evolution of the given enterprise.


a. Understanding VLANs: Your network administrators are probably already using VLANs without you knowing it, as there are many benefits when running a large network:

  • Limits packets to a subset of the company network backbone
  • Limits pointless network flow of packets between non-server machines
  • Increases performance
  • Provides security -- using VLANs to limit hostile connections (for example, direct Internet connection)
  • Allows you to monitor network traffic for planning

b. Latency: In networking, bandwidth represents the overall capacity of a connection: the amount of data that passes through it per unit of time. The greater the capacity, the more likely it is that better performance will result. Latency, in contrast, is the delay a packet experiences in transit. Network tools like ping and traceroute measure latency by determining the time it takes a given network packet to travel from source to destination and back, the so-called round-trip time. Round-trip time is not the only way to specify latency, but it is the most common.

Factors that contribute to network latency

  • Transmission delays: Transmission refers to the medium used to carry the information, such as a phone line, fiber optic line, or wireless connection. Each contributes to the delay in some way, and some media are faster than others. To help reduce network latency, it may be possible to change the medium to a faster type.
  • Propagation delays: Propagation is one of the harder things to control in network latency. Simply put, this is the physical distance between the origin and destination. Naturally, the greater the distance, the more delayed the transmission will be. However, this does not usually cause a significant delay.
  • Routers and computer hardware delays: The other contributors to network latency, routers and computer hardware, can often be changed. Upgrading this hardware can help process information faster, thus speeding up the overall transfer. While this may involve a substantial investment, the benefits may be worth it, depending on how much and how often data is transferred.

Tools: ping, traceroute, netstat, lsof
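
ICMP ping is often blocked by policy, but you can still approximate round-trip time from ordinary user space by timing TCP connection setup. A minimal sketch, with example.com standing in for your real target:

    import socket
    import time

    def tcp_round_trip_ms(host, port=443, samples=5, timeout=3):
        """Approximate round-trip latency by timing TCP connection setup.

        Not ICMP ping, but it requires no special privileges and gives
        a usable latency estimate for planning purposes.
        """
        times = []
        for _ in range(samples):
            start = time.perf_counter()
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    times.append((time.perf_counter() - start) * 1000.0)
            except OSError:
                pass  # drop failed samples rather than skewing the average
        return sum(times) / len(times) if times else None

    rtt = tcp_round_trip_ms("example.com")
    print(f"average RTT: {rtt:.1f} ms" if rtt is not None else "host unreachable")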

c. Security: Network security, and security policy in general, is a complex topic and oftentimes requires a longer discussion on either easing current security policies or working around them to demonstrate the IWD features. A security policy includes creating usage policy rules and procedures that all IT players adhere to. It also includes establishing access levels such as super admin, admin, backup operator, user, etc. Assigning appropriate resource access levels restricts access to critical resources to authorized personnel only. Firewalls, proxy servers, gateways, and email servers need to be given the highest levels of security.

The security provisions typically include the following:

  1. Firewalls, proxy servers, or gateway configuration
  2. Access Control List (ACL) formation and implementation
  3. SNMP configuration and monitoring
  4. Security hot fixes for the software of various devices, operating systems, and applications
  5. Backup and restore procedures

Tools: nmap, traceroute
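
nmap is the right tool for a proper scan, but a quick sanity check of individual firewall/ACL rules needs only a few lines. A minimal sketch; the host and port list are placeholders (probe the ports your ACLs should allow, plus a few they should block):

    import socket

    def port_state(host, port, timeout=1):
        """Return 'open' if a TCP handshake succeeds, else 'closed/filtered'."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return "open"
        except OSError:
            return "closed/filtered"

    HOST = "gateway.example.com"   # placeholder
    for port in (22, 80, 443, 8443, 9443):
        print(f"{HOST}:{port:<5} {port_state(HOST, port)}")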

III. Deployment Targets:

a. Virtualized Environments: A traditional data center offers finite capacity in support of business applications, but it is ultimately limited by obvious constraints (physical space, power, cooling, etc.). Virtualization has extended the runway a bit, effectively increasing density within the data center; however, the physical limits remain. Cloud computing opens the door to huge pools of computing capacity worldwide. This “infinite” capacity is proving tremendously compelling to IT organizations, providing on-demand access to resources to meet short- and long-term needs. The emerging challenge is integration: combining these disparate environments to provide a seamless and secure platform for computing services.

b. Permissions and Access Control: The flexibility offered by virtualization-centric cloud deployments has enabled resource pooling, but has also increased security risks. Systems that were previously physically isolated now share resources due to virtualization, and this opens the door to new attacks in an environment built on shared and pooled resources. To combat this, network and platform administrators have resorted to tighter security policies. These restrictive policies, while intended to protect the shared infrastructure, add complexity to IWD-based deployments. Hence it is vital to understand the underlying security, permissions, and access control of the deployment environment.

The authorization to perform tasks on most virtualized platforms is governed by an access control system. Such a system allows the administrator to specify, with granularity, which users or groups can perform which tasks on various objects. It is defined using three primary concepts:

Privilege: The ability to perform a specific action or read a specific property. Examples include powering on a virtual machine and creating an alarm.

Role: A collection of privileges. Roles provide a way to aggregate all the individual privileges that are required to perform a higher-level task, such as administering a virtual machine.

Object: An entity upon which actions are performed. For example, VMware VirtualCenter objects include datacenters, folders, resource pools, clusters, hosts, and virtual machines.

*Important*: We need to ensure that the user ID defined on IWD has the privileges, roles, and access to objects necessary to deploy a VM/HVE to the target host, including access to objects such as folders (so it can write to disk) and the ability to stop and start the virtual machines.
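
To make the privilege/role/object relationship concrete, here is a toy model in Python. It illustrates the concepts only; it is not any vendor's actual authorization API, and the privilege names merely mimic the VMware naming style:

    # Privileges an IWD service account needs for a deployment (illustrative).
    DEPLOY_PRIVILEGES = {
        "VirtualMachine.Provisioning.Deploy",
        "VirtualMachine.Interact.PowerOn",
        "VirtualMachine.Interact.PowerOff",
        "Datastore.AllocateSpace",
        "Folder.CreateChild",
    }

    # Role = a named collection of privileges.
    ROLES = {
        "ReadOnly":    {"System.View"},
        "IWDDeployer": DEPLOY_PRIVILEGES | {"System.View"},
    }

    # Object -> {user: role} permission assignments.
    PERMISSIONS = {
        "datacenter-1/cluster-A": {"iwd-service-id": "IWDDeployer"},
        "datacenter-1/cluster-B": {"iwd-service-id": "ReadOnly"},
    }

    def missing_privileges(user, obj):
        """Return the deployment privileges `user` still lacks on `obj`."""
        role = PERMISSIONS.get(obj, {}).get(user)
        granted = ROLES.get(role, set())
        return DEPLOY_PRIVILEGES - granted

    for obj in PERMISSIONS:
        missing = missing_privileges("iwd-service-id", obj)
        print(obj, "->", "OK" if not missing else "missing: " + ", ".join(sorted(missing)))

A pre-deployment check along these lines, run against the real platform API, catches permission problems before they surface as cryptic mid-deployment failures.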


These may seem like obvious things to consider; however, I wanted to call out the complexity they pose to traditional ways of provisioning middleware. Well, no pain no gain, and as many organizations embark on this journey of provisioning automation, or what some have artistically labeled private cloud... the change induced can be 'disruptive' and should be factored in before boarding the ship to IaaS/PaaS land...

Thoughts?

:)

Nitin

Saturday, October 1, 2011

Devising equitable chargeback models

I have always thought that, besides the technology, an equitable charge-back model is equally important for a shared infrastructure, which at some point is what cloud computing is based upon...

In many organizations, both large and small, the methodologies used in devising a charge-back model are overly simplistic and dated. New technologies and application development patterns render these models either inadequate for today or not reflective of how resources are actually used by applications or business units. Devising an equitable charge-back model can be the most complex of all challenges involved in adopting Virtual Enterprise. Charge-back models are complex for many reasons, one of which is the difficulty of arriving at a cost model that is agreeable to all business units, another being the complications of formulating the fixed and variable cost distribution of the underlying shared infrastructure. In some cases, this could even warrant hiring a business analyst just for this task. Let's look at some charge-back considerations for ways to simplify the charge-back model.

Charge-back models

First, what exactly does "chargeback" mean? In consolidated environments, the IT custodial service employs a cost recovery mechanism that institutes a fee-for-service type of model. Chargeback enables IT to position its operations as value-add services and uses the cost recovery mechanism to provide varying degrees of service levels at differentiated costs. To devise an effective chargeback model, it is imperative that the IT organization have a complete understanding of its own cost structure, including a cost breakdown by the components used as resources and a utility-like model to justify the billing costs associated with resource use. When it comes to employing chargeback models, there is no silver bullet that will address all the perceptions and user expectations of an IT services commodity model. There are several models prescribed and practiced in the industry today, and each will be a different cultural and operational fit for an organization. A few chargeback models are described below, with a small numeric sketch comparing them after the list; a hybrid model combining select features of more than one model is also an option.

  • Standard subscription model

    The simplest of all models, this one involves dividing up the total operating costs of the IT organization by the total number of applications hosted by the environment. This type of cost recovery is simple to calculate and, due to the appeal of its simplicity, finds its way into many organizations. The year-to-year increase in IT costs due to growth and expansion is simply added to the subscribers' costs. While this is a simple chargeback model, it is fundamentally flawed, as it promotes subsidy and unequal allocation of resources (with this model, a poorly performing application is subsidized by other applications) and de-emphasizes resource consumption and application footprints.

  • Pay per use model

    This model is targeted at environments with LOBs of various sizes and, unlike the standard subscription model, emphasizes charging based on an application's actual consumption of resources and its SLA. For example, an LOB might pay more for shared services simply because its application requires a larger footprint, or because it desires a higher degree of preference, more dedicated resources, or a more demanding service policy. This model can be complicated in its approach, simply due to the framework required around resource usage and monitoring, but it ensures fair and equitable cost recovery. The downside is that it might take longer to arrive at agreeable metrics and cost models associated with resource consumption.

  • Premium pricing model

    The premium pricing model focuses on class of service and guaranteed availability of resources for business applications. As the name suggests, LOBs will incur a premium for preferential treatment of application requests and priority in resource allocation during times of contention to fully meet the service goals. This could also include a dedicated set of hardware nodes to host applications, so depending on the degree of isolation and separation from the shared services model, the cost can increase. Such models are often preferred by LOBs with mission critical and high revenue impact applications. This model will typically coexist with other baseline chargeback models, since not all applications or LOBs in an organization will require premium services.

  • Sample hybrid model

    The hybrid model combines the advantages of multiple chargeback models to best suit an organization. For instance, a hybrid model could have a flat entry fee per application to cover the cost of the base infrastructure, and then pay for actual resources consumed. The flat fee can be characterized as a fixed expense to be hosted, and the additional costs for resource consumption can be linked to variable cost. This example combines the standard subscription model and the pay per use model into a utility-like billing service. Since there is no single chargeback model that will fit all environments, it is common to see several model types combined.
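
As promised above, here is a back-of-the-envelope sketch of how the subscription, pay-per-use, and hybrid models price the same three applications. Every dollar figure and usage share is made up purely for illustration:

    # Annual cost of the shared infrastructure and each app's measured
    # share of resource consumption (all numbers are invented).
    TOTAL_IT_COST = 1_200_000
    apps = {"payroll": 0.10, "web-store": 0.55, "reporting": 0.35}

    # 1. Standard subscription: every application pays the same slice.
    subscription = {app: TOTAL_IT_COST / len(apps) for app in apps}

    # 2. Pay per use: charge in proportion to measured consumption.
    pay_per_use = {app: TOTAL_IT_COST * share for app, share in apps.items()}

    # 3. Hybrid: flat entry fee, then the remainder split by consumption.
    FLAT_FEE = 100_000
    variable_pool = TOTAL_IT_COST - FLAT_FEE * len(apps)
    hybrid = {app: FLAT_FEE + variable_pool * share
              for app, share in apps.items()}

    for app in apps:
        print(f"{app:10} subscription=${subscription[app]:>9,.0f}  "
              f"pay-per-use=${pay_per_use[app]:>9,.0f}  "
              f"hybrid=${hybrid[app]:>9,.0f}")

Notice how the heavy web-store application is subsidized under the flat subscription ($400,000 versus $660,000 under pay per use); that gap is exactly the fairness problem described above.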

While there are advantages to using one chargeback model for an entire organization, there might also be value in introducing a catalog of chargeback models and offering a choice to LOBs. This type of flexibility enables an LOB to select the most suitable chargeback model for its needs, and avoids the challenge of getting all business units to agree on one model. However, multiple chargeback models only work if all models are comparable in fairness and value; all things being equal, the same application should have the same operating cost under every model, and differing costs only when volume, quality of service, chosen service policy, and so on vary.

Thoughts?


:)

Nitin

2. It is the "Data", Silly!!

So we have attempted to spot trends and understand the drivers that our clients go after. The notion of stateless applications promotes 'farm'-like topologies and isolation of application tiers, with a more fragmented business tier starting from the connectivity tier at the edge (Xi50) and further fragmentation in the data tier, from a scalable cache tier to MDM (Master Data Management). Add in improvements in database and storage technologies (IBM's Easy Tier announcements), and it all points to one thing: trying our best to manage data. Here is some pointed evidence:

1. IBM is looking at tighter integration between connectivity and scalable caching, introducing caching in the connectivity tier. --> Speed
2. IBM announced "Easy Tier", an improvement in storage technology with smart offload and on-load between SSD and normal HDD, to improve the speed of data access (which XC10 has done since inception).
3. The IBM Watson offering ensures accurate analysis leading to a better decision platform: data-driven decision making!
4. IBM's BigInsights and big data capabilities, which power IBM Watson for data processing: if I can process data faster, then I can make better/faster decisions.
5. Amazon's Kindle Fire uses the Silk browser, which uses powerful caching to ensure that you only get relevant data.



The Idea: An organization's ability to handle and process data leads to better business decisions, and with the exponential growth in data, the paradigms around traditional 3-tier topologies change. Scalability drives the need for parallelism and hence the fragmented yet 'specialized' tiers. All these micro-movements have significantly changed the middleware landscape. I think we ought to change with it... we ought to embrace the change.

Thoughts?

Nitin