Wednesday, December 21, 2011

Changing Landscape of Middleware

Lately I have been engaged with many clients that are maniacally focused on reducing costs by means of a reduced footprint. In this post, I attempt to share some of my thoughts and experiences around this trend:

  1. Growth poses a problem: As a business grows, so does the resulting infrastructure, primarily the middleware that houses the business logic (a tier that, in some cases, also shares the presentation or storefront). Growth in business, whether from a larger client base or a new business model, implies similar growth in middleware. This consistent growth poses a few challenges. The obvious ones include costs, not only of software but also of hosting infrastructure and hardware. The less obvious ones include manageability of the platform, i.e. general administration, handling performance and service level agreements (SLAs), and addressing scalability.

  2. Consolidation – an answer? Many clients have taken a consolidation approach to address this issue. Consolidation can come at many levels, including (but not limited to) data center consolidation, IT and middleware virtualization, and automation for installation and configuration. Many clients are even bold enough to claim this effort moves them toward being ‘cloud’ ready. However, I think virtualization can only offer so much; at some point, to achieve better resource economies, clients and leaders have to think about better design, an understanding of business and user behavior, and designs that not only appeal to the end user but also nudge the user toward a certain desired behavior.

  3. Design: I have always advocated dedicating a significant amount of time to the design phase. While a design phase may not produce a sizable amount of tangible application artifacts, it does enable better design patterns for future improvements and upgrades. (More on this later.)

  4. Changing landscape of middleware: What I am seeing, as enabling technologies seep into enterprise infrastructure to make it possible, is the movement of content into the outer tiers. One way to address the scalability of middleware processing is to not let a request traverse to the middleware tiers until absolutely necessary. This can be done in many ways, and one can be creative in how to accomplish it; this is where application design comes in. Here is an example:
    1. Many clients push as much content as possible to the ultra edge, the publicly accessible content domain, for example the Akamai content network. This type of content is generally static. Pushing it out enables faster access to site and catalog content and has a high user satisfaction rate. It also keeps the ‘window shoppers’ from consuming your precious middleware cycles.
    2. Caching at the edge tier – not all content can be cached or served from the ultra edge; some content, such as domain-specific content, page fragments, JSPs, etc., can be cached at the edge tier (which is usually behind the firewall). This goes a long way toward saving processing costs in the middleware presentation tier.
    3. Caching at the middleware tier: patterns like the side cache and the in-line database cache are further instrumental in reducing resource usage such as DB lookups and DB connections, and in-memory access enables faster access to various types of content. (A minimal side-cache sketch follows this list.)

  5. The Idea: By caching strategically at many tiers, we offload processing to those tiers and dedicate middleware processing ONLY when it matters most: when the ‘window shoppers’ start to mean business. We dedicate our cycles to those business-meaning clients and serve them better with an enhanced experience.
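
To make the side-cache idea concrete, here is a minimal Java sketch of the cache-aside pattern referenced in point 3 above. The ConcurrentHashMap stands in for a real IMDG client (say, a WXS ObjectMap or a Coherence NamedCache); the class name and the database lookup are illustrative only, not any product's API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Minimal cache-aside (side cache) sketch: check the in-memory tier first,
 *  fall back to the database only on a miss, then populate the cache. */
public class ProductCatalogSideCache {
    // Stand-in for an IMDG client map (e.g. a WXS ObjectMap or Coherence NamedCache).
    private final Map<String, String> grid = new ConcurrentHashMap<>();

    public String getProductDetails(String sku) {
        String details = grid.get(sku);        // 1. try the in-memory tier
        if (details == null) {
            details = loadFromDatabase(sku);   // 2. miss: pay the DB cost once
            grid.put(sku, details);            // 3. next lookup stays in memory
        }
        return details;
    }

    private String loadFromDatabase(String sku) {
        // Hypothetical database lookup; in real code this would be a JDBC/JPA call.
        return "details-for-" + sku;
    }

    public static void main(String[] args) {
        ProductCatalogSideCache cache = new ProductCatalogSideCache();
        System.out.println(cache.getProductDetails("sku-1")); // misses, hits the DB
        System.out.println(cache.getProductDetails("sku-1")); // served from memory
    }
}
```

Repeat requests for the same key never leave the memory tier, which is exactly how the ‘window shoppers’ are kept off the back-end database.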

Challenge: I discussed the design phase; the challenge is to ensure an application design that is modular enough to enable these various tiers of caching and still present a unified front, where the end user is oblivious to the inner workings of an application that derives its content from various layers. An intentional design will allow content and business logic to be isolated, thus enabling caching at the various tiers.

Enabling technologies:

  1. Ultra edge caching – the Akamai content network
  2. Edge caching – edge caching appliances (such as the IBM XI50 or XC10), or an in-memory data grid at the edge (IMDG, such as WebSphere eXtreme Scale (IBM) or Coherence (Oracle))
  3. Side cache and in-line database buffers – such as WebSphere eXtreme Scale (IBM) or Coherence (Oracle)
  4. Smart routing – IBM XI50, IBM AO, F5, Cisco, etc.

Thoughts?

:)

Nitin

Thursday, December 15, 2011

Understanding Mobile Landscape -- Let's Dissect it! AND what is MEAP?

In last week's post, I mentioned:

Mobile space – This is an interesting space, and I think we are rightly focused on it. From a middleware perspective, I think our focus should include numerous mobile-based services, such as location-based services, mobile payment, mobile gateway services, security, and more. I will continue to focus on this space, as I think it is current, relevant, and an area where we still have time to make our mark with respect to application infrastructure.

This week I would like to focus on the mobile space. In this post, my attempt is to dissect it and open a dialog that fosters an understanding of the mobile landscape and of how and where I think we fit in... and as always, I welcome your comments and thoughts.

But before I jump into the dissection and deconstruction of the various components of the enterprise mobile landscape, here are some compelling stats that should force us to think:

1. Worldwide mobile traffic (voice and data) ballooned 3000% over the last year alone.
2. This space is expected to grow 40X over the next 5 years.
3. 11% of the world owns a tablet (that leaves 89% of 6+ billion people, and growing).
4. 90% of the world has access to mobile networks (this amazed me: 90% of the world does not have access to clean water and food, but does have access to a mobile network).
5. 2/3 of us get most of our news on mobile devices.
6. 76% take pictures and video on mobile devices.
7. In the past 2 years, mobile-ready sites have grown from 150,000 to 3 million.
8. 11 billion apps in 2010.
9. On the commerce front, $6.2 billion in purchases were made from mobile devices this year; compare that to $54.2 billion in Black Friday sales.
10. If mobile users were a nation, it would be the largest nation... and I could go on.

The point: The mobile space is disruptive, engaging, controversial, interactive, and international all at the same time. The reach of mobile devices and the recent innovation in this space have certainly gotten the attention of business leaders and entrepreneurs alike. So let's take some time and understand the various components.

1. Basics: 3 components:

Device <---------> Network <---------> Enterprise

  • Device: OS, installed apps, updates/transactions
  • Network: carrier (ATT/Verizon), content gateway
  • Enterprise: design/develop/emulate, host/push updates, services enablement

If we look at the core components, we application infrastructure/middleware folks fit into the Enterprise space; however, there are a few things to ponder:
a. With tight integration between the device manufacturer and the service provider, there is a set of dependencies and new value propositions fulfilled by the network or service provider. For instance, ATT/Verizon/Vodafone/Orange etc. are in a better position to provide value-add services, as they are closest to the client or end user and have a larger degree of control. It is therefore conceivable that these service providers will also enter the mobile application hosting space and other value-add services, such as security. We have seen this with UPS/FedEx and our laptop/system repair and parts replacement.
b. So we can see ourselves, middleware folks/vendors, working with service providers and helping enterprises position themselves to take advantage of the new markets created.
c. And if we are innovative with our offerings, we can also envision the service providers, the ATTs and Verizons of the world, partnering with us to better enable enterprises to reach their target audience in an ocean of new consumers and services.

2. Players and technology: This newly emerged market also has new and non-traditional players that can potentially compete with us or provide technology and services that we have traditionally enabled.

a. Google/Motorola/Apple etc. – not only in the device-making business, but also providing the software, applications, and services on the mobile devices.

b. ATT/Verizon/Vodafone/Orange etc. – carriers and service providers now also in application hosting and service delivery, since they are closest to the network and the client.

c. iPass, ATT Mobile Solutions, MobileIron, etc. – relatively new players that provide services, ‘cloud’-based hosting, and access. This gets even more interesting, as many of these new services are a marriage of the “cloud paradigm” and “mobility platforms”.

d. Technology: A few things we should all know as middleware professionals are terms like HTML5, the Dojo Toolkit, REST, dojox.mobile, and JavaScript, plus device-optimized solutions and how capabilities like local storage can assist with asynchronous communication with applications hosted in the enterprise or the cloud.

e. Discerning between UI and back-end technology: We as technical sellers/consultants should be able to discern between what is on the device (the UI), the type of application (native, hybrid, or web-based), and the back-end technology or engine that supports the application. This also means our sizing and network estimates are no longer traditional conversations.

3. What is MEAP? MEAP stands for Mobile Enterprise Application Platform. I think it is important to understand this device- and carrier-agnostic framework; it is a recently adopted term and quite popular on the mobile street! I also think this framework will allow us to break the ice, demonstrate understanding of the mobile enterprise space, and above all help us discern where we stand. From all my readings, MEAP has 5 elements:

a. Development environment – application development for the enterprise and the device app.
b. Connectors for integration with other services – allows for specific data models.
c. Application management infrastructure – the traditional environment.
d. Runtime on the smartphone/handheld device.
e. Gateway infrastructure – sits between the device and the back-end systems.

I think, and would love to hear your thoughts, that we fit in at the application management infrastructure, and possibly at the gateway infrastructure if we partner with the service providers.

I think this is a good start for a discussion on the mobile landscape. Please add to it!

:)
Nitin

Friday, December 9, 2011

WXS competes with Oracle Coherence

The purpose of this discussion is to compare IBM WebSphere eXtreme Scale (WXS) with Oracle Coherence, specifically their architectural differences. While on the surface they may seem interchangeable, digging beneath to understand the fundamental architecture is required to make a better choice and decision.

Basically:

I do not focus on feature gaps; sooner or later we catch up and one-up each other.

I focus on core architecture differences, such as:

a. Server-to-server communication: Coherence's design is based on multicast, with WKA (Well Known Addresses) as more of an afterthought; WKA disables multicast, which limits scalability in the Coherence architecture.

So while performance should be comparable (we all access data from memory), where we should differ, due to architecture, is scalability: we should scale more easily and painlessly than Oracle Coherence, which is an important yet subtle distinction.

b. Client-to-server communication: Oracle Coherence requires gateway processes for TCP Extend to work, which incurs significant CPU cost and extra network hops. This, too, is more of an afterthought.

c. I would ignore any performance claims, because at the end of the day we all access data from memory. If someone presented such a claim to me, I would want details and would want to know why: what is so special that gives anyone an X-times performance advantage?

Coherence one-ups WXS with C++ integration, which WXS will soon address; however, I still think that a common grid shared between Java/.NET and C/C++ applications is utopian and not practical.

Net-net: WXS competes on scalability claims (not performance), on superior architecture (not feature comparisons), and on thought leadership (not historical presence).

Thoughts?
:)
Nitin

Wednesday, November 23, 2011

Datapower Service Cache with IBM WebSphere XC10. Usability --- aah, that is not important! Let's focus on the release date instead!

Folks, it has been a few weeks since I shared my thoughts... that is because the retail industry is keeping us all busy getting ready for the holiday shopping season! After all the work we have put in, I am not sure many of us will have time to shop... :)

I would like to share the following thoughts, and I do look forward to your thoughts!

I. We should focus on Usability:

This week I would like to focus on usability. I have always admired Apple products, and as I read about the development of many of them, such as the iMac and the iPod, they all have one thing in common: ease of use. The results are evident; these products are not only technically sound but also have a viral adoption pattern. The world loves them and will pay a premium for them! So why not adopt the same design principles in our own products?

The most common complaint our clients have is that whatever feature they desire is always slated for the next release... To me that is not leadership! I am compelled to imagine what many products would look like if they were to adopt “Steve Jobs”-driven design principles and product development lifecycles. And while the development lifecycle may take longer (and hence cost more), a better design speaks volumes about a product and its thought leadership, not to mention the savings in support costs, which are far greater than the initial investment in design.

So I ask, and would love your feedback: why are release deadlines so important that we are willing to release a half-baked product and compromise quality, deferring fixes to a future date?

II. Topic in focus: Datapower Service Cache with IBM WebSphere XC10

The XC10 and XI52 integration is compelling. The idea behind it is to enable faster lookup of cached data and reduced processing at the back-end tiers, with the XC10 used as a general-purpose caching appliance. The XI52-XC10 integration has opened up many possibilities to economize on infrastructure costs and further leverage the XC10's caching mechanism to speed up the XI52 authentication process. In simplest terms, this integration amalgamates the better of two breeds of appliances: the lookup and processing done by the XI52 are sped up by the extensible and scalable cache enabled by the XC10.

Main Points to Note:

  1. Addresses the scalability challenge - the core of the XC10 business value.
  2. By off-loading the storage, look-up, and retrieval of XML documents to the XC10, we are also offloading processing to a tier parallel to the XI52/XS40, which does two things:
    -Speeds up the response -- hence improves performance.
    -Prevents requests going to back end 'middleware' tiers -- addressing scalability.
  3. The XC10 can be scaled as the need for data grows - a classic WXS/XC10 (collective) feature for the scalability proposition.

Have a great Thanksgiving weekend!

:)
Nitin

Tuesday, November 1, 2011

Why Innovate?

So I have been busy pondering this question: why innovate?
We have many large corporations that have innovation somewhere in their values or in a catchy slogan. However, true innovation comes, I think, from necessity; one has to foster a culture of innovation by allowing failures, and that marks the cornerstone of an 'experienced' and progressive set of values.
I want to explore this with an example. Amazon, once the largest bookseller, came up with EC2, the Amazon cloud, while technology powerhouses are still struggling to define themselves in the 'cloud play'. Why is it that a book company was able to envision a computing platform while selling books, and organizations whose core business was computing could never lead with it? Vision.

I think it all boils down to vision and execution... and yes, I think failure goes along the way... but what a vision brings is amazing things like the iPod, the iPad, and Amazon, which hosts everything from applications to the platform Netflix uses to stream movies.
I still think that with middleware in the cloud, the industry is focused on provisioning and automation and is missing the bigger picture: there is no focus on the utility model, or on application design that includes amazing capabilities such as linear scale, speedier access by employing caching technologies, and smarter awareness of me, the customer. I do not think we need a lot of money or investment for these insights; we just need the will and a reason to innovate!!

Thoughts?

:)
Nitin

Saturday, October 15, 2011

Understanding cloud provisioning and Deployment Components

In this post I attempt to deconstruct what is hyped as cloud deployment, which is essentially provisioning automation of a virtual image in a virtualized environment. You want to call that cloud? So be it... but I would be cautious about classifying simple provisioning as 'cloud'.


I. cloud provisioning:

I use IWD as an example. IBM Workload Deployer is a WebSphere-centric provisioning automation tool with an appliance form factor. Like any appliance, IWD is also a computer and possesses similar constraints: finite capacity, network dependency, and processing limits. It is important to understand this, as various parallel actions can stress the performance of the appliance itself. For instance, several parallel large deployments will increase network activity and, in the absence of a robust network, can cause the appliance to slow down.

a. Logs: The log files are stored on the appliance. They can be viewed directly from the appliance using the user interface, or they can be downloaded to your local file system for review. You can also collect log data using the command-line interface.

b. Audit logs: Audit data can be viewed directly from the appliance using the user interface or downloaded to your local file system for review. These logs provide audit-type information, such as user-specific activity.

c. Connectivity to core infrastructure: Ensure that IWD is able to connect to infrastructure such as DNS, NTP, LDAP, and the deployment target.

d. Supported levels of software and virtual environments: Always refer to the current levels and features of the various virtualized ‘deployment’ environments.

II. Network:

Every enterprise has a unique network infrastructure for accessing servers and allowing applications to communicate between components. Various layers support the management of network addressing, deliver critical services, and ensure security. The infrastructure includes specific addressing (sub-nets), address services like DHCP/DNS, identity and directory services like LDAP, and firewalls and routing rules – all reflecting the specific requirements and evolution of the given enterprise.


a. Understanding VLANs: Your network administrators are probably using VLANs without your knowing it, as there are many benefits when running a large network. A VLAN:

  • Limits packets to a subset of the company network backbone
  • Limits pointless network flow of packets between non-server machines
  • Increases performance
  • Provides security -- using VLANs to limit hostile connections (for example, direct Internet connection)
  • Allows you to monitor network traffic for planning

b. Latency: In networking, bandwidth represents the overall capacity of a connection: the amount of data that passes through the connection over time. The greater the capacity, the more likely it is that better performance will result. Latency, by contrast, is the delay a packet experiences. Network tools like ping and traceroute measure latency by determining the time it takes a given network packet to travel from source to destination and back, the so-called round-trip time. Round-trip time is not the only way to specify latency, but it is the most common.

Factors that contribute to network latency

  • Transmission delays: Transmission refers to the medium used to carry the information. This may be a phone line, a fiber optic line, or a wireless connection, to name a few examples. Each contributes to the delay in some way, and some media are faster than others. To help reduce network latency, it may be possible to change the medium to a faster type.
  • Propagation delays: Propagation is one of the harder things to control in network latency. Simply put, this is the physical distance between the origin and destination. Naturally, the greater the distance, the more delayed the transmission will be. However, this does not usually cause a significant delay.
  • Router and computer hardware delays: The other contributors to network latency, routers and computer hardware, can often be changed. In those cases, upgrading the hardware can help process information faster, thus speeding up the overall transfer. While this may involve a substantial investment, the benefits may be worth it, depending on how much and how often data is transferred.

Tools: ping, traceroute, netstat, lsof
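
For a quick, rough feel for round-trip latency from Java code, as opposed to the shell tools above, something like this sketch can help. Note that InetAddress.isReachable() may use ICMP or a TCP echo depending on OS privileges, so treat the number as a crude approximation, not a substitute for ping or traceroute:

```java
import java.net.InetAddress;

/** Crude reachability and round-trip probe. Depending on privileges,
 *  isReachable() uses ICMP ECHO or a TCP echo request, so the timing
 *  is approximate and not a replacement for ping/traceroute. */
public class LatencyProbe {
    public static void main(String[] args) throws Exception {
        String target = args.length > 0 ? args[0] : "example.com"; // placeholder host
        InetAddress host = InetAddress.getByName(target);
        long start = System.nanoTime();
        boolean reachable = host.isReachable(2000); // 2-second timeout
        long rttMillis = (System.nanoTime() - start) / 1_000_000;
        System.out.println(host.getHostAddress() + " reachable=" + reachable
                + " approx-rtt=" + rttMillis + "ms");
    }
}
```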

c. Security: Network security, and security policy in general, is a complex topic and oftentimes requires a longer discussion on either easing current security policies or working around them to demonstrate the IWD features. A security policy includes creating usage policy rules and procedures that all IT players adhere to, and establishing access levels such as super admin, admin, backup operator, user, etc. Assigning appropriate resource access levels restricts access to critical resources to authorized personnel only. Firewalls, proxy servers, gateways, and email servers need to be given the highest levels of security.

The security provisions typically include the following:

  1. Firewalls, proxy servers, or gateway configuration
  2. Access Control Lists (ACLs) formation and implementation
  3. SNMP configuration and monitoring
  4. Security hot fixes to software of various devices, operating systems, and applications.
  5. Backup and restore procedures

Tools: nmap, traceroute

III. Deployment Targets:

a. Virtualized Environments: A traditional data center offers finite capacity in support of business applications, but it is ultimately limited by obvious constraints (physical space, power, cooling, etc.). Virtualization has extended the runway a bit, effectively increasing density within the data center, however the physical limits remain. Cloud computing opens the door to huge pools of computing capacity worldwide. This “infinite” capacity is proving tremendously compelling to IT organizations, providing on-demand access to resources to meet short and long-term needs. The emerging challenge is integration—combining these disparate environments to provide a seamless and secure platform for computing services.

b. Permissions and access control: The flexibility offered by virtualization-centric cloud deployments has enabled resource pooling but has also increased security risks. Systems that were previously physically isolated now, due to virtualization, share resources, and this opens the door to new attacks in an environment built on shared and pooled resources. To combat this, network and platform administrators have resorted to tighter security policies. These restrictive policies, while intended to protect the shared infrastructure, add complexity to IWD-based deployments. Hence it is vital to understand the underlying security, permissions, and access control of the deployment environment.

Authorization to perform tasks in most virtualized platforms is governed by an access control system, which allows the administrator or management to specify, with granularity, which users or groups can perform which tasks on various objects. It is defined using three primary concepts:

Privilege — the ability to perform a specific action or read a specific property. Examples include powering on a virtual machine and creating an alarm.

Role — A collection of privileges. Roles provide a way to aggregate all the individual privileges that are required to perform a higher-level task, such as administer a virtual machine.

Object — an entity upon which actions are performed. For example, VMware VirtualCenter objects include datacenters, folders, resource pools, clusters, hosts, and virtual machines.

*Important* We need to ensure that the user ID defined on IWD has the right privileges, roles, and access to the objects necessary to deploy a VM/HVE to the target host, including access to objects such as folders, so we can write to disk and can stop and start the virtual machines.
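
As a thought experiment, the three concepts map onto a tiny model. This is purely an illustrative sketch of privilege/role/object checks; it is not VMware's or IWD's actual API, and all names are hypothetical:

```java
import java.util.EnumSet;
import java.util.Set;

/** Toy model of the privilege/role/object access-control concepts above.
 *  Names are illustrative, not any vendor's API. Requires Java 16+ for records. */
public class AccessControlSketch {
    enum Privilege { POWER_ON_VM, CREATE_ALARM, WRITE_TO_FOLDER, STOP_START_VM }

    /** A role aggregates individual privileges into a higher-level task. */
    record Role(String name, Set<Privilege> privileges) {}

    /** A permission grants a user a role on a specific object (folder, VM, ...). */
    record Permission(String userId, Role role, String objectName) {}

    static boolean canPerform(Permission p, String object, Privilege action) {
        return p.objectName().equals(object) && p.role().privileges().contains(action);
    }

    public static void main(String[] args) {
        Role deployer = new Role("vm-deployer",
                EnumSet.of(Privilege.POWER_ON_VM, Privilege.WRITE_TO_FOLDER,
                           Privilege.STOP_START_VM));
        Permission iwdUser = new Permission("iwd-service-id", deployer, "prod-folder");

        // The IWD service id can deploy into prod-folder but cannot create alarms.
        System.out.println(canPerform(iwdUser, "prod-folder", Privilege.POWER_ON_VM));  // true
        System.out.println(canPerform(iwdUser, "prod-folder", Privilege.CREATE_ALARM)); // false
    }
}
```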


These may seem like obvious things to consider; however, I wanted to focus on the complexity this poses to the traditional ways of provisioning middleware. Well, no pain, no gain. As many organizations embark on this journey of provisioning automation, or what some have artistically labeled private cloud, the change induced can be 'disruptive' and should be factored in before boarding the ship to IaaS/PaaS land...

Thoughts?

:)

Nitin

Saturday, October 1, 2011

Devising equitable chargeback models

I have always thought that, besides technology, an equitable charge-back model is equally important for a shared infrastructure, which, at some point, is what cloud computing is based upon...

In many organizations, both large and small, the methodologies used in devising a charge-back model are overly simplistic and dated. New technologies and application development patterns render these models either inadequate for today or not reflective of how resources are actually used by applications or business units. Devising an equitable charge-back model can be the most complex of all challenges involved in adopting Virtual Enterprise. Charge-back models are complex for many reasons, one of which is the ability to arrive at a cost model that is agreeable to all business units, another being the complications of formulating the fixed and variable cost distribution of the underlying shared infrastructure. In some cases, this could even lead to hiring a business analyst just for this task. Let's look at some charge-back considerations for ways to simplify the charge-back model.

Charge-back models

First, what exactly does "chargeback" mean? In consolidated environments, the IT custodial service employs a cost recovery mechanism that institutes a fee-for-service type of model. Chargeback enables IT to position their operations as value add services and uses the cost recovery mechanism to provide varying degrees of service levels at differentiating costs. To devise an effective chargeback model it is imperative that the IT organization have a complete understanding of their own cost structure, including a cost breakdown by components used as resources, a utility-like model to justify the billing costs associated with resource use. When it comes to employing chargeback models, there is no silver bullet that will address all the perceptions and user expectations from an IT services commodity model. There are several models prescribed and practiced in the industry today, and each will be a different cultural and operational fit for an organization. A few chargeback models are described below, but a hybrid model combining select features of more than one model is also an option.

  • Standard subscription model

    The simplest of all models, this one involves dividing up the total operating costs of the IT organization by the total number of applications hosted by the environment. This type of cost recovery is simple to calculate, and due to the appeal of its simplicity, finds its way into many organizations. The year to year increase in IT costs due to growth and expansion is simply added to the subscribers' costs. While this is a simple chargeback model, it is fundamentally flawed, as it promotes subsidy and unequal allocation of resources (with this model, a poorly performing application is subsidized by other applications) and de-emphasizes resource consumption and application footprints.

  • Pay per use model

    This model is targeted for environments with LOBs of various sizes and, unlike the standard subscription model, emphasizes charging based on an application's actual consumption of resources and SLA. For example, an LOB might pay more for shared services simply because its application requires a larger footprint, or because they desire a higher degree of preference, more dedicated resources, or a more demanding service policy. This model can be complicated in its approach, simply due to the framework around resource usage and monitoring, but it ensures fair and equitable cost recovery. The downside is that it might take longer to arrive at agreeable metrics and cost models associated with resource consumption.

  • Premium pricing model

    The premium pricing model focuses on class of service and guaranteed availability of resources for business applications. As the name suggests, LOBs will incur a premium for preferential treatment of application requests and priority in resource allocation during times of contention to fully meet the service goals. This could also include a dedicated set of hardware nodes to host applications, so depending on the degree of isolation and separation from the shared services model, the cost can increase. Such models are often preferred by LOBs with mission critical and high revenue impact applications. This model will typically coexist with other baseline chargeback models, since not all applications or LOBs in an organization will require premium services.

  • Sample hybrid model

    The hybrid model combines the advantages of multiple chargeback models to best suit an organization. For instance, a hybrid model could have a flat entry fee per application to cover the cost of the base infrastructure, and then pay for actual resources consumed. The flat fee can be characterized as a fixed expense to be hosted, and the additional costs for resource consumption can be linked to variable cost. This example combines the standard subscription model and the pay per use model into a utility-like billing service. Since there is no single chargeback model that will fit all environments, it is common to see several model types combined.

While there are advantages to using one chargeback model for an entire organization, there might also be value in introducing a catalog of chargeback models and offering a choice to LOBs. This type of flexibility enables an LOB to select the most suitable chargeback model for its needs and avoids the challenge of getting all business units to agree on one model. However, multiple chargeback models only work if all models are comparable in fairness and value: all things being equal, the same application should have the same operating cost under every model, but differing costs when varying volume, quality of service, chosen service policy, and so on.
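
To see how the models differ in practice, here is a toy Java comparison. All rates and volumes are made up purely to show the fixed-versus-variable split; a real model would meter many more resource dimensions:

```java
/** Toy comparison of monthly chargeback under three of the models above,
 *  with entirely made-up numbers purely for illustration. */
public class ChargebackModels {
    // Assumed figures (hypothetical): total IT operating cost and app count.
    static final double TOTAL_MONTHLY_COST = 500_000.0;
    static final int HOSTED_APPS = 50;

    // Standard subscription: everyone pays the same slice, regardless of usage.
    static double subscription() {
        return TOTAL_MONTHLY_COST / HOSTED_APPS; // $10,000 per application
    }

    // Pay per use: charge per unit of consumed resource (hypothetical rates).
    static double payPerUse(double cpuHours, double gbStored) {
        return cpuHours * 0.12 + gbStored * 0.05;
    }

    // Hybrid: flat entry fee for the base platform plus metered consumption.
    static double hybrid(double cpuHours, double gbStored) {
        double flatEntryFee = 2_000.0; // fixed cost of being hosted
        return flatEntryFee + payPerUse(cpuHours, gbStored);
    }

    public static void main(String[] args) {
        System.out.printf("Subscription: $%.2f%n", subscription());          // 10000.00
        System.out.printf("Pay per use : $%.2f%n", payPerUse(20_000, 5_000)); // 2650.00
        System.out.printf("Hybrid      : $%.2f%n", hybrid(20_000, 5_000));    // 4650.00
    }
}
```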

Thoughts?


:)

Nitin

It is the "Data", Silly!!

So we have attempted to spot trends and understand the drivers our clients go after. The notion of stateless applications, which promotes 'farm'-like topologies; the isolation of application tiers into more fragmented business tiers, starting with the connectivity tier at the edge (XI50); further fragmentation in the data tier, from a scalable cache tier to MDM (Master Data Management); and improvements in database and storage technologies (IBM's Easy Tier announcements) -- all point to us trying our best to manage data. Here is some pointed evidence:

1. IBM is looking at tighter integration between connectivity and scalable caching -- introducing caching in the connectivity tier. --> Speed.
2. IBM announced "Easy Tier" -- an improvement in storage technology, with smart offload and on-load between SSDs and normal HDDs -- to improve the speed of data access (which the XC10 has done since inception).
3. The IBM Watson offering ensures accurate analysis leading to a better decision platform -- data-driven decision making!
4. IBM's BigInsights and big data, which power IBM Watson for data processing -- if I can process data faster, then I can make better/faster decisions.
5. Amazon's Kindle Fire -- its Silk browser uses powerful caching to ensure that you only get relevant data.



The Idea: An organization's ability to handle and process data leads to better business decisions, and with the exponential growth in data, the paradigms around traditional 3-tier topologies change -- scalability drives the need for parallelism, and hence the fragmented yet 'specialized' tiers. All these micro-movements have significantly changed the middleware landscape. I think we ought to change with it... we ought to embrace the change.

Thoughts?

Nitin

Wednesday, September 21, 2011

Why I think cloud computing should be considered a “platform” by adopters!

In this post I attempt to explore the various components that collectively define cloud computing as a platform. Note that I have deliberately used the term platform in this context to highlight the importance of the services extended by each layer to enable the notion of unlimited available resources. Virtualization technologies enable efficient use of resources by marginally over-committing computing resources, on the assumption that not all partitions of a system will consume all resources at all times. This assumption is further supported by a policy-driven approach that factors in resource prioritization at times of contention, allocating resources to the most important partition or application. But while virtualization can be seen as a foundation for a cloud computing platform, virtualization alone cannot be, and should not be, mistaken for a cloud computing platform. As a platform, cloud computing attempts to address several IT operational and business challenges, including (but not limited to):

a. Escalating costs of hosting ‘over-provisioned’ and application-specific environments.

b. Reduced ‘ramp-up’ time for provisioning hardware and software resources for the various environments (development, staging, and production).

c. Cost-effective solutions through reuse of software and hardware assets.

d. Reduction in support and maintenance costs through the use of standardized (at times virtual) images.
So it is evident that the overarching goal of a cloud computing platform is to provide a cost-effective solution to the end user with tremendous flexibility and agility. Flexibility and agility thus become important facets of any cloud platform, as these concepts are basic to the intrinsic value enabled by a cloud of resources. Achieving elasticity and agile on-demand provisioning requires a system that is not only self-sustaining but also sensitive to growth. Let's explore what this means. A true cloud platform provides an illusion of infinite computing resources available on demand. This notion requires a systemic approach that includes sense-and-respond subsystems tied into system-level monitoring, front-ended by a rich user interface, and all held together by a robust governance subsystem. These subsystems can be classified as complete and inseparable components of a cloud computing platform. We will discuss them in detail in later posts, but it is important to understand their relevance as vital design imperatives of a cloud computing platform. Cloud computing is a new consumption and delivery model nudged by consumer demand and continual growth in internet services. Cloud computing exhibits at least the following 6 key characteristics:

a. Elastic Environment

b. Provisioning Automation

c. Extreme Scalability

d. Advanced Virtualization

e. Standards based delivery

f. Usage based equitable chargeback

I thus deliberately use the term platform in the context of a cloud computing environment that facilitates flexibility, robustness, and agility: a systemic approach to providing a stage for hosting applications without concern for the availability or provisioning of the underlying resources.

Friday, September 9, 2011

is Cloud just virtualization and Automation? NO!!

Day in and day out, we see every technology vendor attempting to position itself in the cloud realm and struggling to find a niche in this 'cloudy' topic.

I think:
1. Virtualization and automation are building blocks of a cloud computing platform. Alone, they will not solve any problems.

2. Cloud computing is based on the premise that it is a new model that accommodates a new service delivery and consumption model. Now, the term 'service' is very elusive, and I think this is what is exploited by every vendor trying to find that 'fit'.

3. Without a vision and a set of expectations for the 'cloud', all the investment in cloud is pointless and will create more problems than solutions. The goals and vision are very important; they will, and should, drive the investment in cloud strategy and supporting technologies.

4. Chargeback: This is so important that I think without an equitable chargeback model, a cloud initiative will fail. When we address 'service delivery and consumption', metering at the consumption end balances the resources at the delivery end... this balance is probably the most important concept! Otherwise... you will have a buffet of services, unhealthy consumers, and low-quality services!!

So I ask myself: when every technology decision maker, investor, and consumer makes the choice to embark on the journey toward 'cloud-driven' economies of scale, do they have a strategy? Or are we all just consuming the hype, until the next one surfaces and we drop cloud and adore the 'rainmaker'!!

Thoughts?
:)
Nitin

Wednesday, September 7, 2011

How does IMDG solve the Scalability problem?

WebSphere eXtreme Scale (WXS) is an IMDG (In-Memory Data Grid) implementation.


Fundamentals: How does IMDG solve the Scalability problem?

Understanding Scalability:

In understanding the scalability challenge addressed by WebSphere eXtreme Scale, let us first define and understand scalability.

Wikipedia defines scalability as a "desirable property of a system, a network, or a process, which indicates its ability to either handle growing amounts of work in a graceful manner, or to be readily enlarged. For example, it can refer to the capability of a system to increase total throughput under an increased load when resources (typically hardware) are added."

· Scalability in a system is about the ability to do more, whether that is processing more data or handling more traffic, resulting in higher transaction volumes.

· Scalability poses great challenges to database and transaction systems.

· An increase in data can expose demand constraints on back-end database servers.

· Simply adding database capacity can be a very expensive and short-term approach to solving the problem of processing ever-growing data and transactions.

At some point, whether due to practical, fiscal, or physical limits, enterprises are unable to continue to "scale up" by simply adding hardware to existing servers. The progressive approach then adopted is to "scale out" by adding additional database servers and using a high-speed connection between them to provide a fabric of database servers. While viable, this approach poses some challenges around keeping the database servers synchronized; it is important that the databases be kept in sync for data integrity and crash recovery.

Solution: An In-Memory Data Grid, such as IBM's WebSphere eXtreme Scale

WebSphere eXtreme Scale complements the database layer to provide a fault-tolerant, highly available, and scalable data layer that addresses the growing concerns around the data and, eventually, the business.

· Scalability is never an IT problem alone. It directly impacts the business applications and the business unit that owns the applications.

· Scalability is treated as a competitive advantage.

· Applications that are scalable can easily accommodate growth and aid the business functions in analysis and business development.

WebSphere eXtreme Scale provides a set of interconnected Java processes that hold the data in memory, thereby acting as shock absorbers for the back-end databases. This not only enables faster data access, as the data is accessed from memory, but also reduces the stress on the database.
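
To illustrate the mechanics, here is a toy sketch of the partition-and-route idea at the heart of an IMDG. A real grid like WXS adds replication, failover, and rebalancing on top of this; the sketch only shows why adding partitions (JVMs) adds capacity linearly:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Toy illustration of partition-and-route: keys hash deterministically to
 *  one of N partitions, so capacity grows by adding partitions (JVMs)
 *  rather than by growing the database. */
public class PartitionedGrid {
    private final List<Map<String, Object>> partitions = new ArrayList<>();

    PartitionedGrid(int partitionCount) {
        for (int i = 0; i < partitionCount; i++) {
            partitions.add(new ConcurrentHashMap<>());
        }
    }

    private Map<String, Object> route(String key) {
        // The same key always lands on the same partition.
        return partitions.get(Math.floorMod(key.hashCode(), partitions.size()));
    }

    public void put(String key, Object value) { route(key).put(key, value); }
    public Object get(String key)             { return route(key).get(key); }

    public static void main(String[] args) {
        PartitionedGrid grid = new PartitionedGrid(4);
        grid.put("order-42", "state");
        System.out.println(grid.get("order-42")); // served from memory, not the DB
    }
}
```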

more on this.....later.

So what is cloud computing?

As IT operations continue to evolve and transform the business toward agility and adaptability to the ever-changing rules of the marketplace, the efficiency of any IT operation is of paramount significance. The phrase ‘time to market’ has a completely new meaning in today’s dynamic business environment, where the only constant is change. This rapidly changing environment has led IT and business leaders alike to re-think the ‘procurement to provisioning’ process with one goal in mind: efficient use of resources. These resources include IT assets such as hardware and software; human capital such as administrators, developers, testers, and other IT management staff; and the facilities employed in hosting the overall IT infrastructure. The efficiency goals are aimed not only at cost savings but are also defined by business requirements, usually driven by external market forces and by the availability of various enabling technologies. Cloud computing as a platform is an amalgamation of such enabling technologies. The concept of cloud computing is not new; efforts such as net(work) computing and various hardware and software virtualization technologies have in the past attempted to address the need for an ‘unlimited’ resource pool capable of handling varying workloads. These efforts, while they did contribute toward a more mature cloud platform, each fell short, as a singular technology, of the vision of a true cloud computing platform.
So what is a cloud computing platform? Is it simply an automated provisioning system coupled with resource virtualization, where the workload is policy driven, resources are over-committed, and any resource contention is handled by policy-driven resolution? As it turns out, technologies that provide provisioning, virtualization, and policy enforcement form the building blocks of a true cloud computing platform, but no one technology is a cloud offering in and of itself.

Drawing differences between Apache Hadoop and WebSphere eXtreme Scale (WXS)

Hadoop:

Apache Hadoop is a software framework (platform) that enables distributed manipulation of vast amounts of data. Introduced in 2006, it is supported by Google, Yahoo!, and IBM, to name a few. At the heart of its design are the MapReduce implementation and HDFS (the Hadoop Distributed File System), which were inspired by MapReduce (introduced in a Google paper) and the Google File System.

MapReduce: MapReduce is a software framework, introduced by Google, that supports distributed computing on large data sets across clusters of computers (or nodes). It is the combination of two processes, named Map and Reduce.

Note: MapReduce applications must have the characteristic of "Map" and "Reduce", meaning that the task or job can be divided into smaller pieces to be processed in parallel. The result of each sub-task is then reduced to form the answer to the original task. One example of this is website keyword searching: the searching and grabbing tasks can be divided and delegated to slave nodes, and then each result is aggregated on the master node into the final outcome.

In the Map process, the master node takes the input, divides it up into smaller sub-tasks, and distributes those to worker nodes. The worker node processes that smaller task, and passes the answer back to the master node.

In the Reduce process, the master node then takes the answers of all the sub-tasks and combines them to get the output, which is the result of the original task.

The advantage of MapReduce is that it allows for the distributed processing of the map and reduction operations. Because each mapping operation is independent, all maps can be performed in parallel, thus reducing the total computing time.
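
To make Map and Reduce concrete, here is the canonical word-count sketch against Hadoop's Java API (the stock example from the Hadoop documentation; the job driver and input/output setup are omitted for brevity):

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

/** Canonical word count: Map emits (word, 1) for every token,
 *  Reduce sums the counts for each word. */
public class WordCount {
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);   // each word maps to a count of 1
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();            // combine all partial counts per word
            }
            context.write(key, new IntWritable(sum));
        }
    }
}
```

Because each map call is independent, all maps run in parallel across the slave nodes, which is exactly the advantage described above.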


HDFS

From the perspective of an end user, HDFS appears to be a traditional file system. You can perform CRUD actions on files at a given directory path. But, due to the characteristics of distributed storage, there are a "NameNode" and "DataNodes", each with its own responsibilities.

The NameNode is the master of the DataNodes. It provides metadata services within HDFS; the metadata describes the mapping of files to blocks on the DataNodes. It also accepts operational commands and determines which DataNode should perform an action or replication.

DataNodes serve as the storage blocks for HDFS. They also respond to the commands received from the NameNode that create, delete, and replicate blocks.
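
And here is what "appears as a traditional file system" means in code: a small sketch using Hadoop's FileSystem API, assuming a configured Hadoop client (fs.defaultFS picked up from core-site.xml on the classpath); the path is a placeholder:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** HDFS through the FileSystem API: the client just creates, reads, and
 *  deletes files, while the NameNode decides which DataNodes hold the blocks. */
public class HdfsHello {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // reads fs.defaultFS from core-site.xml
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/hello.txt");   // placeholder path

        try (FSDataOutputStream out = fs.create(file, true)) {
            out.writeUTF("hello hdfs");            // write; block placement is transparent
        }
        System.out.println("exists: " + fs.exists(file));
        fs.delete(file, false);                    // recursive=false for a single file
    }
}
```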

Use case:

  1. MapReduce applications
  2. Querying the data stored on the Hadoop cluster
  3. Data integration and processing (grid batch type (ETL) applications)

WXS:

WebSphere eXtreme Scale complements the database layer to provide a fault-tolerant, highly available, and scalable data layer that addresses the growing concerns around the data and, eventually, the business.

· Scalability is never an IT problem alone. It directly impacts the business applications and the business unit that owns the applications.

· Scalability is treated as a competitive advantage.

· Applications that are scalable can easily accommodate growth and aid the business functions in analysis and business development.

WebSphere eXtreme Scale provides a set of interconnected Java processes that hold the data in memory, thereby acting as shock absorbers for the back-end databases. This not only enables faster data access, as the data is accessed from memory, but also reduces the stress on the database.

WebSphere® eXtreme Scale is an elastic, scalable, in-memory data grid. It dynamically caches, partitions, replicates, and manages application data and business logic across multiple servers. WebSphere eXtreme Scale performs massive volumes of transaction processing with high efficiency and linear scalability, and provides qualities of service such as transactional integrity, high availability, and predictable response times.

The elastic scalability is possible through the use of distributed object caching. Elastic means the grid monitors and manages itself, allows scale-out and scale-in, and is self-healing by automatically recovering from failures. Scale-out allows memory capacity to be added while the grid is running, without requiring a restart. Conversely, scale-in allows for immediate removal of memory capacity.

WebSphere eXtreme Scale can be used in different ways: as a very powerful cache, as a form of in-memory database processing space to manage application state, or as a platform for building powerful Extreme Transaction Processing (XTP) applications.
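
For a flavor of the cache usage, here is a minimal client sketch against the WXS ObjectMap API, as I recall it from the product documentation; the catalog endpoint, grid name, and map name are placeholders for your own topology:

```java
import com.ibm.websphere.objectgrid.ClientClusterContext;
import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.ObjectGridManager;
import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
import com.ibm.websphere.objectgrid.ObjectMap;
import com.ibm.websphere.objectgrid.Session;

/** Minimal WXS client sketch: connect to the catalog server, get a session,
 *  and use an ObjectMap as a network-attached cache. The endpoint and the
 *  grid/map names ("cataloghost:2809", "MyGrid", "MyMap") are placeholders. */
public class GridClient {
    public static void main(String[] args) throws Exception {
        ObjectGridManager ogm = ObjectGridManagerFactory.getObjectGridManager();
        ClientClusterContext ctx = ogm.connect("cataloghost:2809", null, null);
        ObjectGrid grid = ogm.getObjectGrid(ctx, "MyGrid");
        Session session = grid.getSession();
        ObjectMap map = session.getMap("MyMap");

        map.insert("customer:1", "Nitin");          // write into the grid
        System.out.println(map.get("customer:1"));  // read back from memory
    }
}
```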

Use Case:

1. Extensible network attached cache

2. In memory data grid

3. Application cache ( session and data)

Why Middleware in clouds?

Folks!
I have been working in the middleware realm for many years, and I figured I would blog about my experiences and thoughts and see what you all think.
All the posts on this blog are my own thoughts, and I kind of take full responsibility for them!

Enjoy and do provide your feedback!

:)
Nitin