Friday, March 14, 2008

Greener Server Virtualization

I am the architect for a project to deploy ESX 3.5 in our environment. The deployment consists of 45 dual-socket quad-core blades with 24GB of RAM each. Most people in technology understand why consolidating servers on a virtualization platform is a good idea: the concept inherently saves power, rack space, and time. I want to touch on some of the new features of virtualization, along with new and existing products and processes, that take server virtualization to the next level in reducing power consumption.

1. Servers. Specifically, the HP C Class blade servers. These servers have built-in features that significantly reduce power consumption.

The first feature intelligently shuts down unneeded power supplies: a 16-blade enclosure can sustain peak load on only 5 power supplies, each operating at over 90% efficiency.
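To illustrate the idea, here is a minimal sketch of that staging logic in Python (the wattage figures and redundancy floor are hypothetical assumptions; the real logic lives in the enclosure firmware):

    import math

    def active_supplies(load_watts, psu_capacity_watts=2250, min_supplies=2):
        # Keep only as many supplies online as the load requires, so each
        # runs near full, efficient load instead of idling half-empty.
        needed = math.ceil(load_watts / psu_capacity_watts)
        return max(needed, min_supplies)

    print(active_supplies(4500))   # lightly loaded enclosure -> 2 supplies
    print(active_supplies(10000))  # heavily loaded enclosure -> 5 supplies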

The C Class blade enclosure also uses 10 miniature jet-engine-style fans that move more air while consuming less power. The fans are hot-swappable, and I hear HP holds dozens of patents on them.

Virtual Connect helps companies reduce the cabling required to run a blade enclosure while still enabling robust versatility on the network and SAN. This technology significantly reduces the overall network and SAN port count in your datacenter.

In total, the C Class blades should cut power consumption by about 40% compared to similar rack-mount servers. HP also offers power monitoring in the Onboard Administrator, which lets you track the enclosure's power usage.


2. Deduplication. Deduplication technology, when applied to your primary and backup storage, can greatly reduce the storage required to run and back up virtual machines. In conjunction with server virtualization, deduplication can reduce your data footprint by up to 90%. I am very intrigued by these numbers and the technology, and I am interested to see how dedupe impacts VM performance and what the real-world data reduction looks like. In the meantime, I am keeping a close eye on this technology, especially NetApp's implementation.
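To make the concept concrete, here is a toy block-level dedupe sketch in Python. It is only an illustration of why virtualized environments dedupe so well; real products do this inside the array, not in application code:

    import hashlib

    def dedupe(blocks):
        # Store each unique block once; identical blocks (common across
        # VMs cloned from the same template) collapse to a single copy.
        store = {}   # fingerprint -> block data
        layout = []  # logical disk layout, as fingerprints
        for block in blocks:
            fp = hashlib.sha256(block).hexdigest()
            store.setdefault(fp, block)
            layout.append(fp)
        return store, layout

    # Ten VMs cloned from one template share most of their blocks
    blocks = [b"base-os-image"] * 10 + [b"vm1-data", b"vm2-data"]
    store, layout = dedupe(blocks)
    print(len(layout), "logical blocks stored as", len(store), "physical blocks")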

3. ESX 3i (AKA ESX Lite, embedded). The new lite version of ESX is capable of running from a 64MB flash device. Combined with blades, this lets you remove the local hard disks, one of the largest consumers of power in the server.

4. Distributed Power Management (DPM) - DPM works in conjunction with the Distributed Resource Scheduler (DRS) to power off unneeded servers in an ESX farm when resources are not needed, then power them back on as demand returns.
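Conceptually, the DPM decision loop looks something like the sketch below. The thresholds are made-up assumptions, and in reality DRS evacuates the VMs before a host powers down:

    def dpm_tick(hosts, low=0.45, high=0.80):
        # hosts: list of dicts with an 'on' flag and 'util' (0.0-1.0)
        powered = [h for h in hosts if h["on"]]
        if not powered:
            return
        avg = sum(h["util"] for h in powered) / len(powered)
        if avg < low and len(powered) > 1:
            # Farm is underused: evacuate and power off the idlest host
            min(powered, key=lambda h: h["util"])["on"] = False
        elif avg > high:
            # Demand is back: power on a standby host if one exists
            for h in hosts:
                if not h["on"]:
                    h["on"] = True
                    break

    hosts = [{"on": True, "util": 0.30}, {"on": True, "util": 0.25},
             {"on": True, "util": 0.40}]
    dpm_tick(hosts)
    print([h["on"] for h in hosts])  # -> [True, False, True]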

5. Disaster Recovery Tools - I have been testing PlateSpin's PowerConvert for disaster recovery of virtual machines. PowerConvert lets you back up a physical or virtual server to a powered-off VM at a secondary site. On a scheduled basis, it sends differentials from the primary server to the backup VM, which stays powered off the entire time. Because the backup VM is updated while powered off, your DR hardware can run non-production servers in the meantime, eliminating the need for dedicated DR hardware.
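The changed-block approach behind that kind of replication can be sketched as follows (a toy illustration, not PlateSpin's actual protocol):

    import hashlib

    def changed_blocks(source_bytes, last_sync_fps, block_size=4096):
        # Compare each source block against the fingerprint recorded at
        # the last sync and yield only the blocks that changed since then.
        for offset in range(0, len(source_bytes), block_size):
            block = source_bytes[offset:offset + block_size]
            fp = hashlib.sha256(block).hexdigest()
            if last_sync_fps.get(offset) != fp:
                yield offset, block       # ship offset + new data only
                last_sync_fps[offset] = fp

    fps = {}
    list(changed_blocks(b"A" * 8192, fps))             # first sync ships everything
    deltas = list(changed_blocks(b"A" * 4096 + b"B" * 4096, fps))
    print(len(deltas), "changed block(s)")             # -> 1

Because only the deltas cross the wire, the DR-site VM stays current without ever being powered on.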

Summary: Virtualization, combined with the right hardware and software, can definitely assist your company with its green initiatives. It is not the entire story, though; treat it as a point solution within a larger green initiative.

Monday, April 30, 2007

Methodologies Smethodologies?

Over the past 6 months I have spent a significant amount of time reviewing different methodologies for operations and infrastructure design. Specifically, I was seeking to improve the architecture and engineering process for infrastructure in my department. Many frameworks and methodologies cover the entire IT process, including change management. I am not interested in deploying such a large framework, although the company I work for is currently moving toward ITIL.

OK, I am rambling here. What am I getting at? Choosing the right methodology for your environment. After looking at ITIL, MOF, WSSRA, PBA, and several others, I determined that no single framework was going to give me what I needed. I evaluated my requirements and found I could combine components of each methodology to reach my specific goal: the document framework from WSSRA, the questioning and impact analysis from PBA, and enough of an understanding of ITIL to make sure the new process fit into the company's larger ITIL initiative.

Solving your organization's problems cannot always be done by choosing the flavor-of-the-month methodology from a magazine. You need to understand your requirements and goals in order to choose the right solution for your company. That could mean a single methodology or a combination of several. In my case, I was able to standardize our architecture and engineering process by taking components from multiple methodologies.

Tuesday, September 26, 2006

Object Oriented Storage: Fad or Future?


I have been investigating platforms for digital storage repositories. This is a subject I have spent a significant amount of time on over the last 3 years.

Email archiving and document management systems are hot technologies right now. They help solve a growing need for structure around data that has traditionally been stored in a very unstructured way. These systems are built on the concept of Content Addressed Storage; if you follow storage, you have seen the CAS acronym all over the place. With the explosion of data growth and the increasing pressure of compliance, companies are adopting document and email repository systems at a rapid rate.

I am not stating anything new here, so why is this relevant? At its core, CAS is an object-oriented storage platform. You upload something and receive a UID for that file, which you can then access through a proprietary interface (sometimes a non-proprietary interface with proprietary web service calls). Amazon S3 is an excellent example of basic CAS functionality: http://www.amazon.com/gp/browse.html?node=16427261 . S3 provides a unified programmatic method of accessing files, regardless of file type. The CAS platform has grown because vendors have added functionality for search, archiving, WORM, email platform integration, and compliance. These are not core functions of CAS; they are services that help CAS solve business problems, and CAS provides the framework for delivering them.
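At its simplest, a content-addressed store is just a hash-keyed object store. The Python sketch below is a bare-bones illustration of that core idea (S3 and the commercial CAS products layer authentication, metadata, and replication on top):

    import hashlib

    class ContentStore:
        # Minimal CAS: the UID handed back to the caller is derived from
        # the content itself, so identical objects get identical UIDs.
        def __init__(self):
            self._objects = {}

        def put(self, data):
            uid = hashlib.sha256(data).hexdigest()
            self._objects[uid] = data
            return uid  # the caller keeps this UID to retrieve the object

        def get(self, uid):
            return self._objects[uid]

    store = ContentStore()
    uid = store.put(b"contents of quarterly-report.pdf")
    assert store.get(uid) == b"contents of quarterly-report.pdf"

A nice side effect of content-derived UIDs is that the same document uploaded twice is stored only once.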

Back to my problem: how does CAS fit in as a repository for media? Vendors would say it is a great fit. I am not sold yet. The CAS platform is more mature in the email, compliance, and document management verticals. That is not to say CAS is not the future; conceptually it is the right approach, but I have yet to see an implementation that is right for large media repositories. The platform still lacks integration with DAM systems, storage independence, and the scalability needed for high-throughput environments. I believe it is coming; it is only a matter of time before CAS vendors target this market.

In summary, I believe CAS will become the de facto standard for all large repositories of file storage, regardless of type.

Monday, September 25, 2006

I am Back....!!!

I am back. After many months on hiatus, I am back in full force. In the last several months I sold a house, moved to an apartment, built a house, and had a baby (in that order). I have dozens of posts in mind, so if you are one of the six readers of this site, check back often. I will be posting every few days…

My son was born on August 26, 2006; we named him Cael Andrew Kusky. The last month has been absolutely amazing!

Best Technology vs Right Technology
In the enterprise, the best technology does not always win. As an architect, my job is to design the right solution for a project, and many times projects are influenced by non-technical factors that sway the technology direction. What am I talking about? This is not a post to slam office politics or insinuate that companies are receiving kickbacks. It is a post about the non-technical factors in a project that affect the technical decisions that get made.

Let me give an example. Thin provisioning is all the rage in the storage industry right now. Only a few products have the feature, and the vendors tout it as the next big thing. What is thin provisioning? Read up on it here: http://www.lefthandnetworks.com/press/index.php?article=esgblogs_051206&aid=178 . The referenced blog makes a decent argument both for and against it. As an architect, I see the potential for thin provisioning to save my organization money. We typically buy storage with 2-3 years of growth in mind, so it would seem thin provisioning could save us thousands of dollars on every project. However, our accounting model for storage does not allow for this: our projects purchase storage up front, and we do not have the financial flexibility to buy storage as the applications grow. In organizations where storage is charged back to user groups as it is consumed, thin provisioning could be a significant source of cost savings; organizations that purchase storage in large quantities up front may not see the same savings. This is an example of how a financial process affects the impact of a technology.
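A back-of-the-envelope comparison makes the point; every number below is hypothetical:

    def upfront_cost(provisioned_tb, cost_per_tb):
        # Buy all projected growth on day one, at today's price
        return provisioned_tb * cost_per_tb

    def thin_cost(tb_per_year, years, cost_per_tb, yearly_price_factor=0.8):
        # Buy only what is consumed each year; per-TB prices historically
        # fall, so the later purchases come cheaper
        return sum(tb_per_year * cost_per_tb * yearly_price_factor ** y
                   for y in range(years))

    # 30TB of 3-year growth bought up front vs. 10TB purchased per year
    print(upfront_cost(30, 5000))         # -> 150000
    print(round(thin_cost(10, 3, 5000)))  # -> 122000

The gap is real, but it only materializes if your purchasing process can actually buy storage incrementally.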

As architects we are tempted to implement the latest and greatest technology. In reality, we need to take into account organizational maturity, business process, impact on support groups, and the many other factors that influence the decision to choose a specific technology. Analyze your environment, and choose the right technology for it rather than the best technology.
Many engineers and architects forget that enterprise technology projects are meant to save the company time and money. Do not lose sight of that goal.

I could post for months on this subject; however, I have many other posts on my mind. There is a wealth of information on this topic on the internet. Specifically, check out the MCA program and the blog of Lewis Curtis: http://www.microsoft.com/learning/mcp/architect/overview/default.mspx http://blogs.technet.com/lcurtis/default.aspx

Tuesday, April 18, 2006

SNW Spring 2006 San Diego

I returned from the Spring Storage Networking World a few weeks ago. There are a few topics that have been on my brain ever since….

1. Is ILM worth it?
I realize ILM means different things to different people. When I mention ILM, I am referring to the automatic categorization and migration of data between storage tiers. ILM and HSM continue to be written about more than they are deployed. While ILM-like solutions appear to be great in theory, actual deployments are few and far between. In talking with customers who have deployed these solutions, several challenges come up:

1. Categorizing data
2. Complex management and deployment
3. Product maturity
4. Support

At this point, I am taking a wait-and-see approach. As an infrastructure architect I design systems that scale to hundreds of TBs. We have demoed and tested a few ILM/HSM-like solutions, and currently it is cheaper and much easier to manage without ILM.
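For reference, the core mechanics of HSM-style migration are easy to sketch; the challenges above are everything around that core. A minimal Python illustration, with made-up thresholds:

    import os, shutil, time

    def migrate_cold_files(tier1_dir, tier2_dir, max_idle_days=90):
        # Move files not accessed within max_idle_days to cheaper storage.
        # Real products also leave a stub behind so access stays transparent.
        cutoff = time.time() - max_idle_days * 86400
        for name in os.listdir(tier1_dir):
            path = os.path.join(tier1_dir, name)
            if os.path.isfile(path) and os.stat(path).st_atime < cutoff:
                shutil.move(path, os.path.join(tier2_dir, name))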

In the meantime I am keeping my eye on a few up-and-coming vendors in this space:
1. Arkivio (http://www.arkivio.com/2/splash2.asp ) deals mostly with unstructured data such as media and document repositories.
2. Scentric, a local Alpharetta, Georgia company, recently released an ILM product for structured data (http://www.scentric.com/ ).

2. NAS Benchmarking

SPEC SFS is widely regarded as the premier NAS benchmark in the industry. The SPEC organization sells the benchmark to customers and assists them in running it. When searching for a NAS solution to store large amounts of rich media, I ran across several vendors touting their SPEC SFS numbers. Since our usage pattern may not match a particular benchmark, I decided to research SPEC SFS before taking these results into account. One of the big things I noticed is the file size distribution:

TABLE 2. File size distribution

Percentage    File size
33%           1KB
21%           2KB
13%           4KB
10%           8KB
8%            16KB
5%            32KB
4%            64KB
3%            128KB
2%            256KB
1%            1MB

This distribution is nothing like the load my projects will put on a system, and honestly, what workload does it match? The file sizes are much too small even for general file serving. I decided this benchmark is out of date and not applicable to my environment.
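Doing the math on the table above shows just how small the synthetic workload is:

    # Weighted average file size of the SPEC SFS distribution (sizes in KB)
    dist = [(0.33, 1), (0.21, 2), (0.13, 4), (0.10, 8), (0.08, 16),
            (0.05, 32), (0.04, 64), (0.03, 128), (0.02, 256), (0.01, 1024)]
    avg_kb = sum(pct * size for pct, size in dist)
    print("average file size: %.1fKB" % avg_kb)  # ~26.7KB

An average file size under 30KB says nothing useful about serving multi-gigabyte media files.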

I have used several other benchmarking tools in the past, and for one reason or another they all seemed flawed. This led me to email the SPEC committee to express my thoughts on the subject (like they care). Surprisingly, I received several emails back from SPEC committee members. There seems to be a consensus that the benchmark is outdated and not of much use for real-world scenarios. OK… what next? I can’t get a straight answer. From what I can tell, the SPEC committee is made up of vendor representatives who all have a stake in the game, so I don’t see a consensus on this matter in the near future.

There are a few benchmarks for NAS systems that show promise. 1. IOzone (www.iozone.org) appears to be very configurable and scalable. It is also more complex than several other benchmarking tools, and it does not have a GUI.

2. NSPLabs NASBENCH (www.nasbench.com ) looks like it will be a great product. One unique feature is a workload recorder: you will be able to record your real workload and replay it in various scenarios.

For now, I will continue to use IOMETER, and I hope to ramp up on IOzone. Ultimately, I am waiting for the NASBENCH product to reach beta, which should happen soon.

Monday, March 06, 2006

MS to release new portable device codename Origami

Microsoft will be unveiling its new portable hardware device on 3/9/06. A marketing video has been circulating around the net for about a week, generating buzz for the device.

http://creativecoremedia.com/mso.swf

http://www.gamespot.com/news/6145278.html


Got utilities?

I came across some interesting tools I would like to share.

SAN Health utility from Brocade

This utility maps out your SAN in a Visio diagram, documents the SAN configuration, identifies potential SAN design and configuration issues, and provides some basic performance metrics... all for the bargain price of free.
http://www.brocade.com/support/sanhealth.jsp


Command line remote desktop/terminal services access

The following two Windows commands come preinstalled on XP and can be used to view and reset terminal server sessions on remote servers.

qwinsta /server:<servername>     (list the sessions on a remote server)
rwinsta <sessionid> /server:<servername>     (reset a session by its ID)

MS IIS troubleshooting

The following is a lifesaver when troubleshooting IIS. It provides a mapping of every running worker process and the application pool it serves. The tool is installed with IIS.

IISApp.vbs (a VBScript, so run it with cscript)

I will continue to add tools to this blog as I come across them.

Tuesday, October 04, 2005

Infiniband in the Enterprise?

I recently (6 months ago) began working with a storage product from Isilon Systems, which uses InfiniBand for back-end communication. InfiniBand (IB) was a term I hadn’t heard in 3 years. In 2002 it was said to be the next big thing in the storage and networking world, but I don’t believe the demand for 30Gb low-latency connectivity was there 3 years ago. With the explosion of digital content and clustered computing, it seems IB has been raised from the dead. Some of the fastest Oracle RAC implementations in the world use IB for cluster interconnect communication. With 4Gb FC emerging in the storage arena and 10Gb copper in the network arena, does IB still have a place? Obviously, Cisco bought Topspin for a reason. Will they kill the technology, or push it as a standard in one of these areas? I did some research on the subject and posted some facts below.

* Cisco purchased TopSpin, one of the largest providers of IB equipment, on 04/14/05
* At least one major server vendor will release IB cards for their blade servers in Q4 2005
* IB offers the lowest latency and highest throughput of all the technologies mentioned
* IB support has been released for the Mac platform
* 10Gb copper manufacturer Chelsio has released a 10Gb TOE card they claim outperforms IB, at a price point of $795
* Embedded computing vendor SBS Technologies embeds IB into their products
* All major server vendors support IB cards in their servers

From everything I have read, I believe IB is here to stay. It will be interesting to see how IB offerings expand over the next 2 years.