Application Delivery Infrastructure for Multi-Cloud Enterprises
Since the early days of the internet era, application delivery infrastructure has been critical to ensuring application availability, performance, and security. The first generation of devices consisted of load balancers with a narrow set of functional capabilities focused on distributing user traffic across multiple servers. As applications became critical to business operations, load balancers evolved into application delivery controllers with an ever-expanding set of functions, from application acceleration and protocol optimization to SSL offload and web application firewalling.
Designing and Deploying Microservices
The rise of microservices has been a remarkable advancement in application development and deployment. With microservices, an application is developed, or refactored, into separate services that “speak” to one another in a well-defined way, via APIs for instance. Each microservice is self-contained, each maintains its own data store (which has significant implications), and each can be updated independently of the others.
NGINX Ingress Controller for Kubernetes
Applications running on Kubernetes need a proven, production-grade application delivery solution. The NGINX Ingress Controller combines the trusted NGINX Open Source and NGINX Plus software load balancers with simplified configuration based on standard Kubernetes Ingress or custom NGINX Ingress resources, to ensure that applications in your Kubernetes cluster are delivered reliably, securely, and at high velocity.
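As an illustration of the standard Kubernetes Ingress resource the controller consumes, here is a minimal sketch; the hostname, Service name, and port are hypothetical, and the `ingressClassName` assumes the controller was installed with the conventional `nginx` class.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress            # illustrative name
spec:
  ingressClassName: nginx       # routes this Ingress to the NGINX Ingress Controller
  rules:
  - host: cafe.example.com      # hypothetical hostname
    http:
      paths:
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc       # hypothetical backend Service
            port:
              number: 80
```

With a resource like this applied, the controller translates the rule into NGINX configuration that proxies requests for cafe.example.com/tea to the pods behind tea-svc.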
NGINX is an open source web server focused on high performance, high concurrency, and low memory usage, with features such as load balancing, caching, access and bandwidth control, and the ability to integrate efficiently with a variety of applications. In this whitepaper, you will learn what NGINX is, why it is used, and about NGINX Plus, with insights into the NGINX architecture, worker processes, caching, and configuration.
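To make the worker-process and caching topics concrete, here is a minimal illustrative nginx.conf sketch; the cache path, zone name, timings, and upstream address are assumptions, not the whitepaper's own configuration.

```nginx
worker_processes auto;          # spawn one worker process per CPU core

events {}

http {
    # Define an on-disk cache zone (path and sizing are illustrative)
    proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m max_size=1g;

    server {
        listen 80;

        location / {
            proxy_cache app_cache;            # serve cached responses when possible
            proxy_cache_valid 200 10m;        # cache successful responses for 10 minutes
            proxy_pass http://127.0.0.1:8080; # hypothetical upstream application
        }
    }
}
```

The master process reads this configuration and manages the worker processes, which handle all connections; the cache lets workers answer repeat requests without contacting the upstream.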
5 Reasons To Switch To Software Load Balancing
This ebook presents five reasons IT executives, network professionals, and application developers should make the move from traditional hardware ADCs to software solutions:
- Dramatically reduce costs without sacrificing features or performance
- Enable the move to DevOps, which requires software-based application delivery
- Deploy everywhere with one ADC solution: bare metal, cloud, containers, and more
- Adapt quickly to changing demands on your applications
- Eliminate artificial, contract-driven constraints on performance
Microservices Reference Architecture
The move to microservices is a seismic shift in web application development and delivery. Because we believe moving to microservices is crucial to the success of our customers, we at NGINX have launched a dedicated program to develop NGINX software features and development practices in support of microservices.
Containerizing Continuous Delivery in Java
Java is a programming language with an impressive history. Not many other languages can claim 20-plus years of active use, and the fact that the platform is still evolving to adapt to modern hardware and developer requirements will only add to this longevity.
Docker for Java Developers
This chapter introduces the basic concepts and terminology of Docker, along with several scheduler frameworks. The main benefit of the Java programming language is its “Write Once, Run Anywhere” portability.
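A minimal multi-stage Dockerfile sketch for a Java application follows; the base-image tags, build tool, and JAR name are assumptions for illustration, not the chapter's own example.

```dockerfile
# Stage 1: build the application with Maven (image tag is an assumption)
FROM maven:3-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn -q package            # produces target/app.jar (hypothetical artifact name)

# Stage 2: run on a slim JRE-only image
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /app/target/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

The multi-stage build keeps the Maven toolchain out of the final image, so the container ships only the JRE and the application JAR.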
DZone Guide to Building and Deploying Applications on the Cloud
The cloud helps you get applications up and running quickly, but you still need to optimize your infrastructure if you want high-performance applications. NGINX has built the leading cloud-native application delivery software, so we’ve assembled these five tips to help you get the speed you need.
In the past few years, the technology industry has witnessed a rapid change in applied, practical distributed systems architecture that has led industry giants (such as Netflix, Twitter, Amazon, eBay, and Uber) away from building monolithic applications to adopting microservice architecture.
The Complete NGINX Cookbook
This chapter discusses load-balancing configurations for HTTP and TCP in NGINX. In this chapter, you will learn about the NGINX load-balancing algorithms, such as round robin, least connections, least time, IP hash, and generic hash.
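The algorithms listed above can be sketched as upstream blocks like the following; the server addresses are placeholders, and note that least_time is available only in NGINX Plus.

```nginx
upstream backend_round_robin {
    server 10.0.0.1;            # round robin is the default algorithm
    server 10.0.0.2;
}

upstream backend_least_conn {
    least_conn;                 # the server with the fewest active connections wins
    server 10.0.0.1;
    server 10.0.0.2;
}

upstream backend_least_time {
    least_time header;          # NGINX Plus only: lowest average response time wins
    server 10.0.0.1;
    server 10.0.0.2;
}

upstream backend_ip_hash {
    ip_hash;                    # pin each client IP address to the same server
    server 10.0.0.1;
    server 10.0.0.2;
}

upstream backend_generic_hash {
    hash $request_uri;          # hash on an arbitrary key, here the request URI
    server 10.0.0.1;
    server 10.0.0.2;
}
```

Each `upstream` block is then referenced from a `proxy_pass` directive; switching algorithms is a one-line change inside the block.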