Wednesday, August 15, 2007
Marathon vs Neverfail
Just because I reference some good notes from a Marathon blog doesn’t mean I love the Marathon product per se.
I still think the Neverfail Group offers a better solution for most, if not all of my customers for High Availability and Replication.
Here are some gripes I have with Marathon. Neverfail doesn't have these same problems; wait for another Neverfail blog post:
-Requires a complete manual build of the secondary server as well as the virtual server
-Requires a migration from existing systems to the virtual server
-Only supports Windows 2003 SP1, 32-bit versions
-everRun FT requires identical processors between servers, and only supports 1-2 processor configs
-Does not support virtualization (VMware or MS Virtual Server), meaning they can’t support many-to-one
-Does not eliminate single points of failure from the software perspective (their virtual server is a single point of failure, so any problem with it results in a loss of availability; this also means that any maintenance to the virtual server, such as patches or updates, results in downtime)
So Marathon is very cool and I like the idea of zero downtime, but in reality it isn’t a great overall solution for most of my business customers. However, I am sure there are some business customers that it would work great for. If Marathon and Neverfail would somehow merge their products together, now that would be powerful.
I think Marathon is a decent MS Cluster alternative for the LAN but for Site to Site failover, there are some big hang-ups.
To protect against disasters with Marathon, SplitSite is an additional $10,000, requires less than 10 ms of round-trip latency between sites, and needs a third server to act as a witness/quorum.
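One practical consequence of the sub-10 ms requirement is that you need to verify a candidate WAN link before buying in. A minimal sketch of that check, assuming you have already gathered round-trip samples (for example with ping) between the two sites; the sample values below are made up for illustration:

```python
MAX_RTT_MS = 10.0  # SplitSite's stated round-trip latency requirement

def link_qualifies(rtt_samples_ms, max_rtt_ms=MAX_RTT_MS):
    """Return True only if every sampled round trip is under the limit."""
    return all(rtt < max_rtt_ms for rtt in rtt_samples_ms)

# Hypothetical samples gathered between the two sites
print(link_qualifies([4.2, 5.1, 4.8, 6.3]))  # True
print(link_qualifies([4.2, 12.7]))           # False
```

Testing against the worst sample rather than the average matters here, since a single slow round trip is enough to break a synchronous site-to-site configuration.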
Reader beware, some of my data could be inaccurate but this is my take.
Availability
Everyone is talking about availability as if everyone can provide it, one way or another. Once you dig just slightly below the surface, it becomes apparent that there are nearly as many definitions for availability as there are vendors touting it. Some consider availability of the data, while others consider availability of the server or storage subsystem.
At its core, availability is defined as "present and ready for use; at hand; accessible." The level of availability an organization needs depends on its service levels. Once the business needs for availability are understood, appropriate solutions can be researched and identified. Check out the white paper "Breaking Through the Noise of Application Availability."
Clustering
"a technology that lets you increase the availability of a server, service or application so it does not become a single point of failure."
This description is completely true, but we find a simplified definition makes clustering easier to understand. Clustering strategies are typically used for scaling out performance, load balancing, and recovery. The way we see it, clustering means connecting at least two servers together, with one acting as a standby for protection. Clustering solutions are rules-based and require custom coding and scripting to define the failover and recovery policies and procedures unique to each environment.
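The rules-based, scripted nature of clustering described above can be sketched in a few lines. This is a hedged illustration only, not any vendor's implementation; the node names, heartbeat check, and missed-heartbeat threshold are all invented for the example:

```python
class Node:
    """A cluster node; `healthy` stands in for a real network health probe."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def heartbeat(self):
        # In a real cluster this would be a network probe, not a flag.
        return self.healthy

def choose_active(primary, standby, missed_limit=3):
    """Custom failover policy: promote the standby only after the
    primary misses `missed_limit` consecutive heartbeats."""
    missed = 0
    while missed < missed_limit:
        if primary.heartbeat():
            return primary
        missed += 1
    return standby

pair = (Node("sql-a", healthy=False), Node("sql-b"))
print(choose_active(*pair).name)  # sql-b
```

The point of the sketch is that the failover policy (how many missed heartbeats, which node takes over) is site-specific logic someone has to write and maintain, which is exactly the scripting burden the paragraph describes.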
Continuous Availability
Continuous availability virtually guarantees that a computing system remains operational in the event of any disruption. The concerns for continuous availability focus on two things: the recovery of applications, data, and data transactions up to the moment of disruption, and 24×7 system availability regardless of planned or unplanned downtime events.
Data Replication
This is the term that has caused the most confusion within the market, yet it provides the lowest level of availability and requires a fairly heavy implementation process. Data replication is more accurately described as a data storage and backup strategy that involves moving data from one server to another using an asynchronous model, which allows for unlimited distances between servers.
Disaster Recovery
Disaster recovery is a plan that enables the protection and restoration of critical information in the event of a disruption. Disaster recovery management includes functions such as identifying critical and vital information, determining recovery needs, developing backup solutions, and implementing the backup/recovery solution.
Fault Tolerance
Fault-tolerant architecture allows a system to continue working even when part of the system fails. Fault-tolerant servers provide continuous availability through hardware failures by operating redundant components. Mark McCarthy posted this definition on TechTarget, which we feel is a great simplified definition. He states:
"Fault-tolerant describes a computer system or component designed so that, in the event that a component fails, a backup component or procedure can immediately take its place with no loss of service."
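The idea in that definition can be sketched as redundant components where a backup takes over with no loss of service to the caller. This is a simplified illustration under assumed names (the `Component` class and `cpu-a`/`cpu-b` labels are invented), not how everRun or any fault-tolerant server actually works internally:

```python
class Component:
    """A redundant component; `working` stands in for real hardware state."""
    def __init__(self, name, working=True):
        self.name = name
        self.working = working

    def serve(self):
        if not self.working:
            raise RuntimeError(f"{self.name} failed")
        return f"served by {self.name}"

def fault_tolerant_call(components):
    """Try each redundant component in turn; the caller never sees a
    failure as long as at least one component is still working."""
    for component in components:
        try:
            return component.serve()
        except RuntimeError:
            continue  # fail over to the next redundant component
    raise RuntimeError("all redundant components failed")

pair = [Component("cpu-a", working=False), Component("cpu-b")]
print(fault_tolerant_call(pair))  # served by cpu-b
```

The caller gets a normal result either way, which is the "no loss of service" property; only when every redundant component has failed does the failure become visible.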
High Availability (HA)
"In case downtimes are not affordable at all we have to approach high availability configurations, where cluster nodes share and balance traffic load, or less expensive hot-standby configurations, where one or more secondary node are ready to take over if the primary has a failure."
To better understand the concept of HA, and see how HA software works, visit this link and watch a video demo of high availability, or what we like to call infinite availability in action.