Evolution (now Thriftly.io): Scalability Through Reverse Proxy


A reverse proxy server can handle many different roles depending on the size and complexity of your network. In general, a reverse proxy is a server that retrieves resources from other servers on behalf of clients.


A reverse proxy may fill multiple roles:

  1. Load Balancing: Distributing incoming requests across multiple backend servers according to a chosen algorithm.
  2. Caching: A reverse proxy can cache common requests and serve them statically rather than passing them on to the backend servers for processing.
  3. Security: With a reverse proxy in front of your application servers, you reduce your security footprint because the application servers can remain entirely private to your network, with no external access.
  4. SSL Acceleration: A reverse proxy can encrypt and decrypt all traffic to and from clients and pass unencrypted traffic to the backend servers, reducing their encryption-related load.
  5. Failover: If there are multiple application servers, the proxy server can be set up to detect a failure and stop sending traffic to the affected application server.
  6. Application Serving: On a single server, a reverse proxy can be used in conjunction with Evolution (now thriftly.io) to provide a complete application stack.

 

These roles overlap in places, but we'll go over each of them in more detail below. For the purposes of discussion, we'll reference terminology and setups common to the Apache HTTP Server.

Load Balancing

This is probably the most common use of a reverse proxy. The setup is easy to illustrate:

An illustration of a common reverse proxy setup

In this setup, the reverse proxy might even be referred to as a load balancer. To the outside world there is just a single server to connect to, but this load balancer or proxy server takes each request and forwards it on to an application server on our private network. The means by which the load balancer decides which server should receive the request is referred to as a Load Balancer Scheduler Algorithm in Apache.

There are several algorithms to choose from depending on your specific setup. The default, and most commonly used, algorithm is the Request Counting Algorithm. This simple algorithm just makes sure each application server receives an equal number of requests. The drawback is that it can distribute work unevenly if your requests vary widely in cost, for example when some requests routinely take much longer to process or are much larger than others.

For these cases there are two more algorithms: the Weighted Traffic Counting Algorithm and the Pending Request Counting Algorithm. The former assigns requests based on the amount of traffic, in bytes, passed between the application server and the load balancer. The latter assigns requests based on how many queued requests each server already has, giving you some assurance that no application server will get buried under long-running requests.

When configuring a load balancer, you can do quite a few things to adjust how these algorithms operate. For instance, you can specify that a specific backend server should handle twice the load of the other servers (perhaps one application server is newer and has more processing power). This is referred to as a Load Factor; a sketch of such a configuration follows below.
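As a rough sketch of what such a setup might look like in Apache 2.4, the snippet below defines a balancer that uses the Weighted Traffic Counting Algorithm and gives one member twice the load factor of the other. The hostnames and ports (app1.internal, app2.internal, 8080) are placeholders for your own backend servers, and the module paths will vary by installation.

    # Proxy and balancer modules (adjust paths to match your installation).
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
    LoadModule lbmethod_bytraffic_module modules/mod_lbmethod_bytraffic.so

    <Proxy "balancer://appcluster">
        # app2 is the newer machine, so it receives roughly twice the traffic.
        BalancerMember "http://app1.internal:8080" loadfactor=1
        BalancerMember "http://app2.internal:8080" loadfactor=2
        # Weighted Traffic Counting; byrequests (the default) or bybusyness also work here.
        ProxySet lbmethod=bytraffic
    </Proxy>

    ProxyPass        "/" "balancer://appcluster/"
    ProxyPassReverse "/" "balancer://appcluster/"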

Another important concept in load balancing is "stickiness". Stickiness means attempting to route requests from a specific client to the same application server that handled its initial request. This can give your application pool a big boost in speed, since the application server may have locally cached data related to that first request. How stickiness is implemented, however, depends on your application. Apache supports two stickiness implementations: URL encoding and cookies. Both methods may require some work on your part to implement in your application, but they are much more accurate than the client IP mapping method used by some other load balancers. More information on stickiness is available in the Apache mod_proxy_balancer documentation; a cookie-based sketch follows below.
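As an illustration, a cookie-based sticky configuration in Apache might look something like the following. It requires mod_headers in addition to the proxy modules, and the cookie name (ROUTEID) and backend names are just example values.

    # Tag each response with a cookie naming the worker that handled it.
    Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

    <Proxy "balancer://appcluster">
        BalancerMember "http://app1.internal:8080" route=1
        BalancerMember "http://app2.internal:8080" route=2
        # Route follow-up requests to whichever worker the cookie names.
        ProxySet stickysession=ROUTEID
    </Proxy>

    ProxyPass        "/" "balancer://appcluster/"
    ProxyPassReverse "/" "balancer://appcluster/"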

In general, the defaults provided by Apache for load balancing will cover most situations. Most of the settings described here and in the Apache documentation can be used to optimize your application's performance. You can certainly start with a default configuration and tweak from there as needed. Mertech provides consulting on optimizing your reverse proxy setup for your specific equipment and application stack.

Caching

Apache's reverse proxy module can be used alongside its caching module. This allows common requests that meet certain criteria to be cached rather than proxied to the application server. This isn't something that will normally be used with Mertech's Evolution (now thriftly.io) Server, because caching is generally limited to GET requests and Evolution (now thriftly.io)'s JSON-RPC implementation sends all requests as POSTs. Further discussion of the topic is therefore outside the scope of this article, but you can find more information in the documentation for Apache's caching module.

Security

One of the primary uses of a reverse proxy is to protect application servers from outside attacks. A reverse proxy can be placed in a DMZ with only a single port open to allow access to the application servers, which remain inside the private network:

Reverse proxy as security measure

This setup provides security in multiple ways. In most private networks you will have database servers or other resources that the application servers need access to. By keeping the application servers within the private network, you can give them unfettered access to these resources. If an application server were directly exposed to the internet, it would need to sit inside the DMZ itself, and you might have to open many ports to various services within your private network. Proxying the connection from the DMZ into your private network avoids all of that. Also, because a reverse proxy is often a very simple server that rarely needs access to other resources, it further reduces your security footprint.

SSL Acceleration

Using a reverse proxy as an SSL accelerator/terminator is a very common practice in conjunction with any of the other uses we're outlining. In this setup, the application servers behind the proxy server know nothing about encryption; they talk to the proxy server in unencrypted HTTP. The proxy server handles decrypting requests from clients and encrypting the replies from the application servers. This does several things for your overall setup (a minimal configuration sketch follows the list below):

  1. Lowers the number of servers that need to have the SSL certificate installed and maintained.

  2. Allows custom SSL acceleration hardware to be installed in just one place, boosting performance of the entire application stack (versus software-based SSL on each application server).

  3. Reduces the likelihood of an incorrectly configured application server leaking unencrypted data.

  4. Lowers the number of servers that need SSL security patches applied (a need that has come up more often in recent months).
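A minimal sketch of SSL termination in Apache is shown below. The certificate paths, hostname, and backend address are placeholders, and mod_ssl plus the proxy modules are assumed to be loaded.

    <VirtualHost *:443>
        ServerName api.example.com

        # The proxy terminates TLS here...
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/api.example.com.crt
        SSLCertificateKeyFile /etc/ssl/private/api.example.com.key

        # ...and talks plain HTTP to the private application server.
        ProxyPass        "/" "http://app1.internal:8080/"
        ProxyPassReverse "/" "http://app1.internal:8080/"
    </VirtualHost>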


If your application is hosted in the cloud, various vendors offer software-based SSL termination on Amazon, and Linode offers a service called NodeBalancer that is essentially a load-balancer-as-a-service with SSL termination included. These are not quite SSL accelerators, but with the computing power now available on VPSs, they will often meet the needs of even very large sites with ease.

Failover

If we go back to our first illustration, along with load balancing, the Apache reverse proxy module also provides various failover methods. For instance, you can set a timeout that allows you to take down an application server for maintenance; when it's brought back up, the proxy server will detect that it's available and add it back into the pool of available servers. You can also configure a server as a hot standby that will only be used if all other application servers are down. This is useful if you have a live test server that can serve requests but is normally reserved for testing: set up as a hot standby, it will only receive traffic if every other server in the pool goes down. There are also parameters controlling how many failed requests it takes before a server is considered "down". These parameters and many others have very sane defaults that will work for most applications; a brief configuration sketch follows below. Mertech provides consulting on customizing these settings for your specific environment and failover needs.
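By way of example, a hot standby and a retry interval can be expressed roughly as follows. The server names are placeholders, and retry is the number of seconds a failed member sits out before the proxy tries it again.

    <Proxy "balancer://appcluster">
        BalancerMember "http://app1.internal:8080" retry=30
        BalancerMember "http://app2.internal:8080" retry=30
        # Hot standby: only receives traffic if every other member is down.
        BalancerMember "http://test.internal:8080" status=+H
    </Proxy>

    ProxyPass        "/" "balancer://appcluster/"
    ProxyPassReverse "/" "balancer://appcluster/"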

Application Serving

In smaller installations, this will be the most common setup. Although you lose many of the advantages provided by a reverse proxy (such as load balancing, security via isolation, and failover), it still gives you a simple place to do SSL termination. In this configuration, rather than proxying traffic to remote machines on a network, the traffic is simply proxied to a different listening port on the same machine's localhost loopback interface:

Illustration of All-In-One Reverse Proxy & Application Server

You can easily have multiple running applications exposed via one HTTP interface but listening on various internal ports, simply by configuring the proxy server to forward different URL paths to different listeners; a sketch of such a configuration follows below. In fact, this is the default configuration set up for an Evolution (now thriftly.io) development environment out of the box. This configuration is efficient, malleable, and easily expandable to a multi-machine setup. If we compare it to an all-in-one ASP.NET or ISAPI based web server like the DataFlex Web Application Server, it's easy to see the simple mechanism by which an application can be expanded.
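As a sketch, path-based forwarding to two local listeners looks like this. The paths and port numbers are illustrative examples, not the actual out-of-the-box values.

    # Two applications on the same machine, each on its own local port,
    # exposed through a single public HTTP interface by URL path.
    ProxyPass        "/orders/"  "http://127.0.0.1:8082/"
    ProxyPassReverse "/orders/"  "http://127.0.0.1:8082/"

    ProxyPass        "/billing/" "http://127.0.0.1:8083/"
    ProxyPassReverse "/billing/" "http://127.0.0.1:8083/"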

For instance, if one of the applications being served in the example above begins taxing the server too much, that Evolution (now thriftly.io) Application Server can be moved by itself onto another machine on the network, and the proxy server configuration updated to point to the new IP address instead of localhost (see the sketch after the illustration below). That's it! Nothing else to do. If the other application also continues to grow, it can be moved onto its own server, and all of a sudden we are looking at something very similar to our first load balancing example, except that the proxy server is simply proxying requests to the two separate applications rather than balancing load between them:

Illustration of Reverse Proxy Growth
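Continuing the hypothetical example above, only the target of the relevant ProxyPass lines changes when an application moves to its own machine (192.168.1.20 stands in for the new server's address):

    # The /orders/ application has moved to its own machine; only the target changes.
    ProxyPass        "/orders/"  "http://192.168.1.20:8082/"
    ProxyPassReverse "/orders/"  "http://192.168.1.20:8082/"

    # /billing/ stays on the local loopback for now.
    ProxyPass        "/billing/" "http://127.0.0.1:8083/"
    ProxyPassReverse "/billing/" "http://127.0.0.1:8083/"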

Summary

A reverse proxy server can handle a multitude of tasks common to most web-based applications. It can also make the growth of your application less painful and more predictable, not only from a cost standpoint but also from a technical standpoint. Although our focus here was on using the Apache HTTP Server alongside Evolution (now thriftly.io), the same concepts apply to other proxy servers such as nginx, or even to the load balancing/proxy services provided by cloud vendors such as Amazon, Linode, and DigitalOcean, among others. If you would like Mertech to provide a consultation on optimizing or configuring your web application, please contact us at sales@mertechdata.com or fill out our online inquiry form.

