
Replace costly hardware ADCs with NGINX Plus for speedy deployments and simpler provisioning

Sandeep Khuperkar | CTO & Director, Ashnik
Singapore, 17 Oct 2016



NGINX, an open source, high-performance HTTP server, reverse proxy, and IMAP/POP3 proxy server, has gained popularity as a load balancer of choice for enterprise grade deployments. NGINX provides a software-based application delivery platform that load balances HTTP and TCP applications at a fraction of the cost of hardware solutions.
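A minimal HTTP load-balancing configuration looks like the following. The upstream name and backend addresses here are hypothetical placeholders, not from the original article:

```nginx
http {
    # Hypothetical pool of back-end application servers
    upstream app_backend {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    server {
        listen 80;

        location / {
            # Forward client requests to the pool; NGINX uses
            # round-robin distribution by default
            proxy_pass http://app_backend;
        }
    }
}
```

Swapping in a hardware ADC's feature set is often just a matter of adding directives (session persistence, health checks, SSL termination) to this same block.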

In addition to being open source, scalable, and easier to maintain, a key reason many organizations choose NGINX Plus as their load balancer is that it provides a more flexible development environment, which helps them adopt a more agile development process. NGINX offers significant performance improvements, and it not only saves organizations money but also unlocks flexibility that hardware appliances may not be able to provide. NGINX can run anywhere: on bare metal in your data center, in public or private clouds, and in containers.

In a typical enterprise setup, the web server and the ADC (application delivery controller, usually a hardware appliance) are separate components. NGINX is changing that approach to web application delivery: it combines the two elements, providing performance and scalability at both the ADC and web layers.

Scenarios where NGINX can be the choice of your Load Balancer:

  • NGINX can be deployed on your hardware of choice, sized and tuned for specific workloads, and it provides the flexibility to optimize for workload requirements in any physical, virtual, or cloud environment.
  • NGINX is used in scenarios ranging from handling all load-balancing duties to sitting behind a legacy hardware load balancer. This makes it easier for organizations to introduce a private cloud or a hybrid cloud solution into their existing environment without ripping and replacing what they already have. It also enables simplicity, with just one platform to manage, and can be the end result of a migration that starts with another deployment scenario.
  • NGINX can also load balance new applications while a legacy hardware appliance continues, in parallel, to load balance existing applications.
  • Enterprises also deploy NGINX when they want to move to a more modern software-based platform but do not want to rip and replace all of their legacy hardware load balancers; over time they can migrate the legacy applications from the hardware load balancer to NGINX.
  • NGINX can sit behind a legacy hardware load balancer. Clients connect to the hardware load balancer, which accepts their requests and distributes them to a pool of NGINX instances; the NGINX instances in turn load balance the requests across the group of actual back-end servers.
  • In a multi-tenant environment where many internal application teams share a device or set of devices, the hardware load balancers are often owned and managed by the network team but need to be accessed by several teams. Because one team's configuration changes might negatively impact other teams, an ideal solution is to deploy a set of smaller load balancers, such as NGINX, so that each application team has its own instance and can make changes without impacting the others.
  • NGINX allows application teams to complete faster development cycles and, in turn, concentrate on building better features and keeping up with demand. Hardware is not only expensive and time-consuming to deploy; it can also be rigid and limiting when it comes to management and integrating new components.
  • Many organizations are moving from dedicated hardware toward cloud-based infrastructure not only to save money but also to gain scalability, speed to market, ease of maintenance, and a more flexible development environment.
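The scenarios above apply to TCP traffic as well as HTTP: NGINX's `stream` module can load balance arbitrary TCP services. A minimal sketch, with hypothetical back-end addresses (a pair of PostgreSQL servers is assumed purely for illustration):

```nginx
stream {
    # Hypothetical pool of TCP back ends (here, PostgreSQL on 5432)
    upstream tcp_backend {
        server 10.0.0.21:5432;
        server 10.0.0.22:5432;
    }

    server {
        # Accept TCP connections and distribute them across the pool
        listen 5432;
        proxy_pass tcp_backend;
    }
}
```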

Organizations typically used separate components for the web server and the application delivery controller (ADC) or reverse proxy load balancer, and the load balancing tool was usually a hardware component. With NGINX, organizations are able to change this approach by combining the two into a single software-based tool for web delivery that offers performance and scalability across all layers.

A hardware appliance may impose limits on load balancing, whereas the NGINX software load balancer runs as fast as you let it, whenever you want, and in any environment.

Microservices and DevOps are also changing the development process, allowing applications to be delivered not only with greater performance but with greater speed. NGINX Plus can coexist with your existing infrastructure, and over time you can gradually migrate legacy applications from hardware appliances to NGINX Plus. Developers benefit from checking in NGINX Plus configuration as code and integrating it with the application; IT administrators benefit from using NGINX Plus to load balance enterprise applications.
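Configuration-as-code is concrete with NGINX Plus features such as active health checks, which are plain directives in a versionable file. A minimal sketch, again with hypothetical upstream names and addresses (the `health_check` directive is available in NGINX Plus, not open source NGINX):

```nginx
upstream app_backend {
    zone app_backend 64k;   # shared-memory zone, required for active health checks
    least_conn;             # send each request to the least-busy server
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;
        # Probe each server every 5s; mark it down after 3 failures,
        # and up again after 2 successes
        health_check interval=5s fails=3 passes=2;
    }
}
```

Because this file lives alongside the application code, a load-balancing change goes through the same review and deployment pipeline as any other change.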

– Sandeep Khuperkar | CTO & Director, Ashnik


Sandeep is the Director and CTO at Ashnik. He brings more than 21 years of industry experience (most of it at Red Hat and IBM India), with 14+ years in open source and in building open source and Linux business models. He is on the advisory board of JJM College of Engineering (Electronics Dept.) and a visiting lecturer at several engineering colleges, where he works to enable them on open source technologies. He is an author, enthusiast, and community moderator at Opensource.com, and a member of the Open Source Initiative, the Linux Foundation, and the Open Source Consortium of India.



