Channel: Load Balancer - Load Balancing for Applications and Technologies

Load Balancer settings when migrating from Exchange Server 2013 to Exchange Server 2016


When performing this migration, both versions of Exchange Server will need to coexist on the network. For this to work, Microsoft Exchange Server 2013 must be running Cumulative Update 10 (CU10) or later. If you have not updated your Exchange Server 2013 infrastructure to CU10, you must do so before introducing Exchange Server 2016 if you want them to work together.

The load balancer setup plays a vital role in client connections to Exchange Server. It establishes the connection from the client to the Exchange Client Access Server (CAS) servers, which run the client connection services, and distributes the client connection load across the multiple backend Exchange servers. Load balancers have various mechanisms to check the health and availability of target CAS servers and establish the connection to the most suitable one at any given time.
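The health-check-then-distribute behavior described above can be pictured with a short sketch. This is a hypothetical illustration only, not LoadMaster's actual implementation; the hostnames and the `pick_server` helper are invented for the example:

```python
import socket

# Hypothetical sketch of load balancer server selection; hostnames
# are placeholders, not real CAS servers.
SERVERS = [("cas1.domain.local", 443), ("cas2.domain.local", 443)]

def is_healthy(host, port, timeout=2.0):
    """Basic TCP health check: can a connection be opened to the CAS?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_server(servers, counter, health_check=is_healthy):
    """Round-robin across the servers that currently pass the health check."""
    healthy = [s for s in servers if health_check(*s)]
    if not healthy:
        raise RuntimeError("no healthy CAS servers available")
    return healthy[counter % len(healthy)]
```

Real load balancers layer richer checks (HTTP probes against Exchange health endpoints, weighting, persistence) on top of this basic pattern.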

Configuring the Load Balancer when Migrating from Exchange 2013 to Exchange 2016

When organizations are migrating from Exchange 2013 to Exchange 2016, it may take some time to complete the project. As a result, it is essential to configure the previous and new Exchange systems to coexist. A correctly configured load balancer setup can make this possible.

There are various protocols that clients use to connect to the Exchange server. They are:

  1. Outlook Web App (OWA)
  2. Autodiscover
  3. Exchange Web services (EWS)
  4. Exchange Active Sync(EAS)
  5. Offline Address book (OAB)
  6. Outlook Anywhere
  7. MAPI
  8. SMTP Protocol
 

The table below shows the Exchange Server 2013 Internal and External URL configuration that will be used on the 2013 CAS Servers.

  

Figure 1 shows a typical Exchange 2013 deployment with a load balancer.


Figure 1. The current Exchange 2013 organization before introducing Exchange 2016

 

Exchange Organizations after introducing Exchange 2016

As servers running Exchange Server 2016 are introduced into an organization, the same CAS URLs and namespace from Exchange 2013 can be used. This configuration supports coexistence between Exchange Server 2013 and Exchange Server 2016, as the same URL, Mail.domain.com, is used for all client connections.

Figure 2 shows a typical deployment model for Exchange Server 2013 and Exchange Server 2016 coexistence. The Exchange Server 2016 servers need to be configured to work with the URLs Mail.domain.com and Autodiscover.domain.com that are in use on Exchange Server 2013. This allows all clients to connect to Exchange Server 2016; if the user's mailbox is on Exchange 2016, the CAS service proxies the connection directly to the user's mailbox. If the target mailbox is on an Exchange Server 2013 host, the Exchange Server 2016 CAS service redirects the request to the Exchange Server 2013 mailbox server.

If the Exchange Server CAS connection is managed by the load balancer, then no DNS changes are required, as the load balancer can be configured to direct all new connections to the Exchange 2016 servers. Load balancers can also be configured to route SMTP traffic to Exchange Server 2016 based on the mailbox server location.

Figure 2. Exchange 2016 and Exchange 2013 coexistence.

 

The points below outline how various protocols are proxied from Exchange 2013 to Exchange 2016:

Autodiscover

Outlook -> Exchange 2016 -> Authenticates -> Determines the mailbox location -> Retrieves the configuration details from the Exchange 2013 or Exchange 2016 mailbox server, depending on the target mailbox location.

Outlook Anywhere

Outlook -> Exchange 2016 -> Authenticates -> Determines mailbox location -> Proxies Connection -> Exchange 2013/2016 Mailbox.

Offline address book

Outlook -> Exchange 2016 -> Authenticates -> Determines the mailbox location -> Retrieves the configuration details from the Exchange 2013 or Exchange 2016 mailbox server, depending on the target mailbox location -> Shares the connection URL with the client.

OWA Protocol

OWA Client -> Exchange 2016 -> Authenticates -> Proxies/Redirects Connection -> Exchange 2013/2016 Mailbox.

Active Sync Protocol

Active Sync Client -> Exchange 2016 -> Authenticates -> Determines mailbox location -> Proxies Connection -> Exchange 2013/2016 Mailbox.

Exchange Web Service

Exchange Web Service Client -> Exchange 2016 -> Authenticates -> Determines mailbox location -> Proxies Connection -> Exchange 2013/2016 Mailbox.
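The common pattern in all of these flows can be sketched as a small dispatch function. This is a hypothetical illustration only (the mailbox addresses, server names, and `route_request` helper are invented), showing the decision the Exchange 2016 frontend makes after authenticating and determining the mailbox location:

```python
# Hypothetical sketch of the frontend decision the flows above describe;
# mailbox addresses and server names are invented examples.
MAILBOX_LOCATION = {
    "alice@domain.com": "EX2016-MBX01",  # mailbox already moved to 2016
    "bob@domain.com": "EX2013-MBX01",    # mailbox still on 2013
}

def route_request(user, mailbox_map):
    """After authenticating, the Exchange 2016 frontend determines the
    mailbox location, then proxies to a 2016 mailbox server or hands the
    request back to the legacy 2013 server when the mailbox lives there."""
    target = mailbox_map.get(user)
    if target is None:
        return ("reject", None)
    if target.startswith("EX2016"):
        return ("proxy", target)
    return ("redirect", target)
```

In either case the client only ever talks to the single namespace on the load balancer; the proxy/redirect choice happens behind it.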


Load Balancer settings when migrating from Exchange Server 2010 to Exchange Server 2013


The load balancer setup plays a vital role in client connections to Exchange Server. It establishes the connection from the client to the Exchange Client Access Server (CAS) servers, which run the client connection services, and distributes the client connection load across the multiple backend Exchange servers. Load balancers have various mechanisms to check the health and availability of target CAS servers and establish the connection to the most suitable one at any given time.

Configuring the Load Balancer when migrating from Exchange 2010 to Exchange 2013

When organizations are migrating from Exchange 2010 to Exchange 2013, it may take some time to complete the project. As a result, it is essential to configure the previous and new Exchange systems to coexist. A correctly configured load balancer setup can make this possible.

There are various protocols that clients use to connect to the Exchange server. They are:

  1. Outlook Web App (OWA)
  2. Autodiscover
  3. Exchange Web services (EWS)
  4. Exchange Active Sync(EAS)
  5. Offline Address book (OAB)
  6. Outlook Anywhere
  7. MAPI
  8. SMTP Protocol

Figure 1 shows a typical Exchange 2010 deployment with a load balancer.

Figure 1. The current Exchange 2010 organization before introducing Exchange 2013

The table below shows the Exchange 2010 Internal and External URL configuration that will be used on the Exchange 2010 CAS Servers.

Exchange Organizations after introducing Exchange 2013 Server

As servers running Exchange Server 2013 are introduced into an organization, the same CAS URLs and namespace from Exchange 2010 can be used, with one addition for Outlook Anywhere as discussed below. Additionally, the same security certificates used on Exchange 2010 servers can be used on Exchange Server 2013 as well. The table below includes the additional Outlook Anywhere URL.

Outlook Anywhere is the default protocol for client connections in Exchange 2013. To allow coexistence, Outlook Anywhere must also be enabled on Exchange 2010, and all Outlook clients should use Outlook Anywhere as their default connection. The PowerShell one-liner below enables Outlook Anywhere on any Exchange 2010 Client Access server where it is not already enabled:

Get-ExchangeServer | Where {($_.AdminDisplayVersion -Like "Version 14*") -And ($_.ServerRole -Like "*ClientAccess*")} | Get-ClientAccessServer | Where {$_.OutlookAnywhereEnabled -Eq $False} | Enable-OutlookAnywhere -ClientAuthenticationMethod Basic -SSLOffloading $False -ExternalHostName mail.domain.com -IISAuthenticationMethods NTLM, Basic

Figure 2 shows a typical deployment model for Exchange 2013 and Exchange 2010 coexistence. With the Outlook Anywhere configuration in place, the load balancers need to be configured to work with Exchange Server 2013 using the Mail.domain.com and Autodiscover.domain.com URLs shown in the table. This allows clients to connect, and if the target mailbox is on an Exchange 2013 server it connects directly to the user's mailbox. If the target mailbox is on a legacy Exchange 2010 server, then the Exchange 2013 CAS server proxies the connection to the Exchange 2010 CAS server, which sets up the connection to the mailbox. The proxying of this connection is entirely transparent to users.

DNS changes will also be required on the load balancer that is handling requests to the CAS servers. This is necessary to ensure that requests are sent to the right Exchange 2013 and Exchange 2010 servers during the coexistence period. The same is true for load balancers directing SMTP requests, if in use. The DNS changes required will be unique to each Exchange organization.

Figure 2. Exchange 2013 and Exchange 2010 Co-existence.

The points below outline how various protocols are proxied from Exchange 2013 to Exchange 2010:

OWA Protocol

OWA Client -> Exchange 2013 CAS -> Proxies Connection -> Exchange 2010 CAS -> Exchange 2010 mailbox

Active Sync Protocol

Active Sync Client -> Exchange 2013 CAS -> Proxies Connection -> Exchange 2010 CAS -> Exchange 2010 mailbox

Exchange Web Service

Exchange Web Service Client -> Exchange 2013 CAS -> Proxies Connection -> Exchange 2010 CAS -> Exchange 2010 mailbox

Outlook Anywhere

Outlook Client -> Exchange 2013 CAS -> Proxies Connection -> Exchange 2010 CAS -> Exchange 2010 mailbox

Hopefully this article series will help you understand the role a load balancer plays in configuring coexistence between Exchange 2010, 2013, and 2016. The next two articles will cover coexistence when migrating from Exchange Server 2010 to Exchange Server 2016, and from Exchange Server 2013 to Exchange Server 2016.

Microsoft Exchange Servers in Amazon Web Services (AWS) - A Use Case


Amazon Web Services (AWS) is Amazon's Cloud infrastructure platform. It delivers similar tools and services to those available from Microsoft Azure. One of the central services offered in AWS is Amazon EC2; this is Amazon's method of providing scalable virtual servers on demand from an Elastic Cloud, hence the name EC2. It's simple and cost-effective to deploy as many server instances as required via the EC2 web interface. These servers can run any of the popular operating systems in use today.

When Microsoft Windows Server is deployed as the operating system on EC2 virtual machines, then any of the Microsoft business server applications can be installed, including Microsoft Exchange Server 2007 through to the latest 2016 version.

Key Features of Deploying Exchange on AWS

Deploying Windows Server and Exchange Server on EC2 based infrastructure provides all the benefits associated with Cloud delivery, including the ability to quickly scale the compute and storage capacity up and down as needs change. Here are some of the key features and benefits that come along with AWS EC2 deployment:

  1. No upfront capital outlay for initial or future servers.
  2. Easy to start implementation with just the server instances required, and then scale up quickly if capacity needs to be enhanced.
  3. Easy, automated deployment of virtual machines via a simple Web-based interface and template-based workflows.
  4. Full load balancing with Kemp LoadMaster for AWS to deliver high availability and site resilience. LoadMaster can work in conjunction with and enhance the following AWS features:
    1. High Availability with the AWS Availability Zone functionality.
    2. Site Resiliency across Zones within the AWS region.
  5. Exchange security and compliance features are fully available when deploying on EC2.
  6. The Microsoft Preferred Architecture for Exchange Server deployments can be fully realised when deploying on EC2 hosted virtual machines.
  7. Complete control of the Exchange servers and environment.

How the deployment works

Deploying Microsoft Exchange Server to AWS EC2 is done with the same planning and design stages as an on-premises deployment. A summary of this process is given below:

  1. Collect all the requirements and users profile information for the deployment.
  2. Use the Exchange Server Role Requirements Calculator to input all the design requirements and user profile information.
  3. Determine the CPU specifications needed from the requirements calculator and design the servers needed with High Availability and Site Resiliency in mind.
  4. Based on design, deploy virtual servers in a Zone within Amazon EC2.
  5. Install and configure Exchange Server on the virtual machines and configure High Availability using DAG settings.
  6. Deploy a Kemp LoadMaster load balancer in the first Zone and configure High Availability for Exchange services.
  7. Deploy Virtual Servers in a second Zone in the same AWS region to provide Site resilience.
  8. Install and Configure Exchange servers on virtual machines in the second Zone and configure site resiliency using DAG settings.
  9. Deploy a KEMP LoadMaster Load balancer in the second zone and configure High Availability for Exchange services.

The diagram below shows the final design layout for a two Zone Exchange Server deployment on EC2.

AWS is an ideal replacement for on-premises Exchange Server deployments. It provides the server, storage, and site infrastructure needed to implement Microsoft's Preferred Architecture for Exchange database resilience. AWS also delivers all the other resources required for a successful deployment. Kemp LoadMaster for AWS integrates seamlessly into AWS EC2 deployments, giving you the best Cloud infrastructure from Amazon and the best Application Delivery Controller and load balancer from Kemp Technologies.

AWS Classic Elastic Load Balancer Features Guide


While the AWS Cloud provides many additional components and services beyond what is offered by Elastic Load Balancer (ELB), Kemp's Virtual LoadMaster (VLM) for AWS has additional and enhanced features and capabilities that provide a rich set of integrated functionality, easily configured and managed via the Web User Interface. The same set of capabilities is available when LoadMaster, as an appliance or virtual machine, is used on premises. Having a common interface across all environments is a big advantage and provides a simpler, consistent management experience when deploying hybrid or heterogeneous cloud environments.

Features                              | Elastic Load Balancer (ELB) | Virtual LoadMaster (VLM)
Layer 4 Load Balancing                | Yes                         | Yes
Layer 7 Load Balancing                | Yes                         | Yes
Pre-configured application templates  | No                          | Yes
High Availability                     | Yes                         | Yes
Clustering                            | No                          | Yes
Scheduling Methods                    | Round Robin Only            | Advanced Scheduling Methods
Server Persistence                    | L4 Only                     | L4/L7 (Advanced options)
SSL Termination/Offload               | Yes                         | Yes
Content Caching/Compression           | No                          | Yes
VM Resource Availability Awareness    | No                          | Yes
Header Content Switching              | No                          | Yes
Header Manipulation                   | No                          | Yes
Health Check Aggregation              | No                          | Yes
TCP Multiplexing                      | No                          | Yes
Reverse Proxy                         | No                          | Yes
Advanced Authentication               | *                           | Yes (Advanced Options)
Web Application Firewall Protection   | No                          | Yes

* Supported by other services with AWS

AWS GovCloud (US)


Many federal agencies have adopted Kemp Technologies products, including the US Department of Defense (DoD), US National Security Agencies, US Federal Civilian Agencies, and US Federal Healthcare Agencies. Kemp offers AWS GovCloud (US) clients the features listed below:

 

Kemp's Virtual LoadMaster (VLM) for the cloud is a full-featured, advanced Layer 4-7 load balancing and content management engine capable of performing advanced application delivery functions such as multi-protocol support, clustering, SSL offload and re-encryption, and content caching and compression with advanced authentication options, among others. Available in AWS GovCloud (US), it offers a rich set of features, resulting in an effortless transition of applications from on-premises data centers to the cloud.

Virtual LoadMaster is available in AWS GovCloud (US) with Free and BYOL licensing options. The Free version delivers 20Mbps throughput and can be used in both development and production environments, simplifying DevOps-oriented delivery methodologies. Customers may optionally upgrade this license to one with greater throughput with the purchase of a perpetual license. Kemp's BYOL VLMs provide throughput options ranging from 200Mbps to 10Gbps with varying SSL TPS capacities, an option to include Web Application Firewall, and an option to add GSLB multi-site load balancing.

Kemp cloud experts are available to help with integration and migration needs at all stages of a project, from inception to production.

Load Balancing NGINX


NGINX is a high-performance web server designed to handle thousands of simultaneous requests and has become one of the most widely deployed web server platforms on the Internet. Kemp LoadMaster can bring resilience and scalability to your NGINX environment with an easily deployed load balancer that can service millions of active connections in a highly available configuration.

Fig 1. LoadMaster load balancing topology for NGINX

LoadMaster Features

  • Deploy on-premises or in cloud (Azure and AWS)
  • Layer 4 and Layer 7 load balancing
  • NGINX health checking
  • NGINX template to simplify and speed setup
  • High performance reverse proxy
  • SSL offload for NGINX
  • Content switching
  • 24x7 Support
  • Support for a mix of NGINX and other web servers (Apache, IIS)

Getting your Load Balancer for NGINX

LoadMaster is available as a 30-day trial, or if you have traffic requirements of less than 20Mbit/s you can have a LoadMaster for free. The trials are delivered as pre-built appliances for the major hypervisor platforms, or if you wish, you can select the trial and free versions from the Azure and Amazon Web Services (AWS) marketplaces.

Configuring Load Balancing for NGINX

The LoadMaster documentation set provides guidelines on how to deploy and configure a LoadMaster appliance to load balance application workloads on NGINX and how to configure advanced features such as single sign-on and reverse proxy for NGINX.
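For health checking, it can help to give the load balancer a dedicated endpoint on each NGINX server rather than probing a production URL. The fragment below is a minimal, hypothetical nginx.conf sketch, not taken from the LoadMaster documentation; the port and the /lb-health path are arbitrary choices:

```nginx
server {
    listen 8080;

    # Lightweight target for the load balancer's HTTP health checks.
    # Returns 200 while this NGINX instance is in service.
    location = /lb-health {
        access_log off;
        return 200 "OK\n";
    }
}
```

The load balancer's health check would then be pointed at this path, so a failing application elsewhere in the config doesn't have to take the whole server out of rotation unless you want it to.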

Load Balancing Features for NGINX

SSL Offload – LoadMaster can offload the SSL processing workload from the NGINX servers and also provide a single point of administration for SSL certificates and security.

DDoS Protection – LoadMaster includes a Snort-compatible engine to offer DDoS protection for NGINX servers.

Authentication – The Edge Security Pack in LoadMaster provides comprehensive authentication and single sign-on services for NGINX.

Reverse Proxy – LoadMaster can act as a reverse proxy for NGINX environments.

Caching and Compression – LoadMaster uses caching and compression to improve NGINX performance.

SSL Redirect – Redirection of all non-HTTPS requests to HTTPS.

Intelligent Session Persistence – Multiple options are available to ensure clients are load balanced to the same server for the session lifetime.

Web Application Firewall (WAF) – The LoadMaster WAF for NGINX provides application-level protection from common and day-zero vulnerabilities.

Global Server Load Balancing (GSLB) – Load balance NGINX across multiple physical locations, including cloud, to provide disaster recovery failover and geo-aware traffic distribution.

Load Balancing Citrix Virtual Apps and Desktops (StoreFront)


Maximum reliability and availability for Citrix Virtual Apps and Desktops

Ensure the best user application experience (AX) on Citrix StoreFront with the performance, simplicity, and resilience of Kemp LoadMaster, while avoiding the cost and complexity of Citrix Gateway (NetScaler).

Increased Simplicity

Drop-in replacement for Citrix Gateway (NetScaler) in Citrix StoreFront environments

Better Value

Significant TCO savings of up to 85% compared to Citrix Gateway (NetScaler) and F5

World-Class Support

Proven and supported by Kemp’s exceptional customer service team

The better Load Balancer choice

Kemp LoadMaster is a drop-in load balancer replacement for Citrix ADC (NetScaler) that includes pre-defined templates for common Citrix Virtual Apps and Desktops environments to greatly simplify deployment and ensure optimal security and performance. LoadMaster offers significant TCO savings compared to Citrix ADC and is supported by a team that regularly achieves a 99% customer satisfaction rating.

Comparison of VLM-3000, VPX-3000 and F5-VE-3G over 3 years shows a saving of over 85%

Significant TCO Savings

Kemp LoadMaster provides all the features required at a significantly lower total cost of ownership. In some cases, it is even more cost effective to purchase a new LoadMaster than to renew Citrix ADC (NetScaler) support.

Simple Load Balancing of StoreFront Servers

In a Citrix Virtual Apps and Desktops environment, the Kemp LoadMaster sits at the edge (behind a firewall) and accepts connections from remote clients, load balancing those connections across the available StoreFront servers. The LoadMaster manages authentication against external authentication systems such as Active Directory or RADIUS. When StoreFront returns the ICA file to the client, LoadMaster intercepts it and modifies the information with the appropriate load-balanced VDI server information.
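The ICA interception step can be pictured as a simple text rewrite: ICA files are INI-style, and the Address= entry names the host the client should contact. The sketch below is a hypothetical illustration with an invented file body and host names; the real LoadMaster logic is internal to the appliance:

```python
import re

# Hypothetical sketch of the ICA rewrite step; the file body and host
# names below are invented examples.
def rewrite_ica_address(ica_text, new_address):
    """Replace the Address= value in an INI-style ICA file body so the
    client connects via the load-balanced address instead of the one
    StoreFront originally returned."""
    return re.sub(r"(?m)^Address=.*$", f"Address={new_address}", ica_text)

ica = "[ApplicationServers]\nDesktop=\n\n[Desktop]\nAddress=vdi01.internal:1494\n"
rewritten = rewrite_ica_address(ica, "vdi-lb.domain.com:1494")
```

The client then launches the session against the load-balanced address, keeping the internal VDI server names out of the returned file.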

Try it Now

Load balancing for Citrix Virtual Apps and Desktops is available on all LoadMaster models and is easily configured using our optimized template.

Download the template and deployment guide to get started

Microsoft Azure Government


Many federal agencies have adopted Kemp Technologies products, including the US Department of Defense (DoD), US National Security Agencies, US Federal Civilian Agencies, and US Federal Healthcare Agencies. Kemp offers Azure Government clients the features listed below:

 

Kemp's Virtual LoadMaster (VLM) for the cloud is a full-featured, advanced Layer 4-7 load balancing and content management engine capable of performing advanced application delivery functions such as multi-protocol support, clustering, SSL offload and re-encryption, and content caching and compression with advanced authentication options, among others. Available in Azure Government, it offers a rich set of features, resulting in an effortless transition of applications from on-premises data centers to the cloud.

Virtual LoadMaster is available in Azure Government with Free and BYOL licensing options. The Free version delivers 20Mbps throughput and can be used in both development and production environments, simplifying DevOps-oriented delivery methodologies. Customers may optionally upgrade this license to one with greater throughput with the purchase of a perpetual license. Kemp's BYOL VLMs provide throughput options ranging from 200Mbps to 10Gbps with varying SSL TPS capacities, an option to include Web Application Firewall, and an option to add GSLB multi-site load balancing.

Kemp cloud experts are available to help with integration and migration needs at all stages of a project, from inception to production.

