Original source: https://www.nginx.com/blog/10-tips-for-10x-application-performance/msr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io
Tip #1: Accelerate and Secure Applications with a Reverse Proxy Server
If your application runs on a single machine, the solution to performance problems might seem obvious: just get a faster machine, with more processors, more memory, a faster disk array, and so on. Then the new machine can run your WordPress server, Node.js application, Java application, and so on, faster than before. (If your application accesses a database, the solution may seem just as simple: get two faster machines, with a faster network connection between them.)
The problem is, machine speed might not be where the performance problem lies. Web applications often run slowly because the computer is switching among different kinds of tasks: handling thousands of user connections, accessing files on disk, running application code, and more. The application server may end up thrashing – running out of memory, swapping, and making many requests wait on a single task such as disk I/O.
Instead of upgrading your hardware, you can take an entirely different approach: adding a reverse proxy server to offload some of these tasks. The reverse proxy server sits in front of the application servers and handles network connections. Only the reverse proxy server is connected directly to the Internet; it communicates with the application servers over a fast internal network.
Using a reverse proxy server frees the application servers from handling user connections, letting them concentrate on building the pages that the reverse proxy server then sends to users over the network. No longer having to wait on client responses, the application servers can run at close to full speed.
Adding a reverse proxy server also makes your web server setup more flexible and scalable. For instance, if one web server is overloaded, another server of the same type can easily be added to share the load; if a web server goes down, it can be removed from the cluster.
Because of the flexibility it provides, a reverse proxy server is also a prerequisite for other performance-boosting capabilities, such as:
Load balancing – A load balancer runs on the reverse proxy server to distribute requests across the backend application servers. With a load balancer in place, you can add application servers without changing your application at all.
Caching static files – Files that are requested directly, such as image files and code files, can be stored on the reverse proxy server and sent straight to the client. This serves assets faster and offloads the application servers, letting the application itself run faster.
Securing your site – The reverse proxy server can be configured for high security and monitored for fast recognition of and response to attacks, keeping the application servers protected.
NGINX software is specifically designed for use as a reverse proxy server. NGINX uses an event-driven approach to handling network connections, which performs better than traditional web servers. NGINX also adds more advanced reverse proxy features, such as application health checks, specialized request routing, advanced caching, and more.
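As an illustration, a minimal reverse proxy setup in NGINX might look like the sketch below. The backend address, port, and server name are placeholders, not values from the original article:

```nginx
# Minimal reverse proxy: NGINX listens on port 80 and forwards
# requests to an application server on the internal network.
server {
    listen 80;
    server_name example.com;

    location / {
        # Placeholder address for the backend application server
        proxy_pass http://10.0.0.10:8080;

        # Pass the original host and client address through to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Only this server block faces the Internet; the application server behind it never handles client connections directly.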
Tip #2: Add a Load Balancer
Adding a load balancer is a relatively easy change which can create a dramatic improvement in the performance and security of your site. Instead of making a core web server bigger and more powerful, you use a load balancer to distribute traffic across a number of servers. Even if an application is poorly written, or has problems with scaling, a load balancer can improve the user experience without any other changes.
A load balancer is, first, a reverse proxy server (see Tip #1) – it receives Internet traffic and forwards requests to another server. The trick is that the load balancer supports two or more application servers, using a choice of algorithms to split requests between servers. The simplest load balancing approach is round robin, with each new request sent to the next server on the list. Other methods include sending requests to the server with the fewest active connections. NGINX Plus has capabilities for continuing a given user session on the same server, which is called session persistence.
Load balancers can lead to strong improvements in performance because they prevent one server from being overloaded while other servers wait for traffic. They also make it easy to expand your web server capacity, as you can add relatively low-cost servers and be sure they’ll be put to full use.
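The round-robin and fewest-active-connections methods described above can be sketched in NGINX configuration roughly as follows. The server addresses are placeholders:

```nginx
# Define a pool of backend servers; requests are distributed
# round-robin by default.
upstream app_backend {
    # least_conn;   # uncomment to send each request to the server
                    # with the fewest active connections instead
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080 backup;  # used only if the others are unavailable
}

server {
    listen 80;
    location / {
        # Proxy to the pool rather than to a single server
        proxy_pass http://app_backend;
    }
}
```

Adding capacity then means adding one more server line to the upstream block, with no application changes.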
Protocols that can be load balanced include HTTP, HTTPS, SPDY, HTTP/2, WebSocket, FastCGI, SCGI, uwsgi, memcached, and several other application types, including TCP-based applications and other Layer 4 protocols. Analyze your web applications to determine which you use and where performance is lagging.
The same server or servers used for load balancing can also handle several other tasks, such as SSL termination, support for HTTP/1/x and HTTP/2 use by clients, and caching for static files.
NGINX is often used for load balancing; to learn more, please see our overview blog post, configuration blog post, ebook and associated webinar, and documentation. Our commercial version, NGINX Plus, supports more specialized load balancing features such as load routing based on server response time and the ability to load balance on Microsoft’s NTLM protocol.
Tip #3: Cache Static and Dynamic Content
Caching improves web application performance by delivering content to clients faster. Caching can involve several strategies: preprocessing content for fast delivery when needed, storing content on faster devices, storing content closer to the client, or a combination.
There are two different types of caching to consider:
-
Caching of static content. Infrequently changing files, such as image files (JPEG, PNG) and code files (CSS, JavaScript), can be stored on an edge server for fast retrieval from memory or disk.
-
Caching of dynamic content. Many web applications generate fresh HTML for each page request. By briefly caching one copy of the generated HTML, you can dramatically reduce the total number of pages that have to be generated while still delivering content that’s fresh enough to meet your requirements.
If a page gets ten views per second, for instance, and you cache it for one second, 90% of requests for the page will come from the cache. If you separately cache static content, even the freshly generated versions of the page might be made up largely of cached content.
There are three main techniques for caching content generated by web applications:
-
Moving content closer to users. Keeping a copy of content closer to the user reduces its transmission time.
-
Moving content to faster machines. Content can be kept on a faster machine for faster retrieval.
-
Moving content off of overused machines. Machines sometimes operate much slower than their benchmark performance on a particular task because they are busy with other tasks. Caching on a different machine improves performance for the cached resources and also for non-cached resources, because the host machine is less overloaded.
Caching for web applications can be implemented from the inside – the web application server – out. First, caching is used for dynamic content, to reduce the load on application servers. Then, caching is used for static content (including temporary copies of what would otherwise be dynamic content), further off-loading application servers. And caching is then moved off of application servers and onto machines that are faster and/or closer to the user, unburdening the application servers, and reducing retrieval and transmission times.
Improved caching can speed up applications tremendously. For many web pages, static data, such as large image files, makes up more than half the content. It might take several seconds to retrieve and transmit such data without caching, but only fractions of a second if the data is cached locally.
As an example of how caching is used in practice, NGINX and NGINX Plus use two directives to set up caching: proxy_cache_path and proxy_cache. You specify the cache location and size, the maximum time files are kept in the cache, and other parameters. Using a third (and quite popular) directive, proxy_cache_use_stale, you can even direct the cache to supply stale content when the server that supplies fresh content is busy or down, giving the client something rather than nothing. From the user’s perspective, this may strongly improve your site or application’s uptime.
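A sketch of those three directives working together might look like this. The paths, sizes, and timings are illustrative values, not recommendations from the article:

```nginx
# Reserve 10 MB of shared memory for cache keys and up to 1 GB on disk;
# entries unused for 60 minutes are evicted.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;
    location / {
        proxy_cache app_cache;
        proxy_cache_valid 200 1s;          # cache successful responses briefly
        # Serve a stale copy if the upstream is erroring, timing out,
        # or in the middle of refreshing the entry
        proxy_cache_use_stale error timeout updating;
        proxy_pass http://10.0.0.10:8080;  # placeholder backend
    }
}
```

Even the one-second validity shown here implements the ten-views-per-second arithmetic above: roughly 90% of requests are answered from the cache.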
NGINX Plus has advanced caching features, including support for cache purging and visualization of cache status on a dashboard for live activity monitoring.
For more information on caching with NGINX, see the reference documentation and NGINX Content Caching in the NGINX Plus Admin Guide.
Note: Caching crosses organizational lines between people who develop applications, people who make capital investment decisions, and people who run networks in real time. Sophisticated caching strategies, like those alluded to here, are a good example of the value of a DevOps perspective, in which application developer, architectural, and operations perspectives are merged to help meet goals for site functionality, response time, security, and business results, such as completed transactions or sales.
Tip #4: Compress Data
Compression is a huge potential performance accelerator. There are carefully engineered and highly effective compression standards for photos (JPEG and PNG), videos (MPEG-4), and music (MP3), among others. Each of these standards reduces file size by an order of magnitude or more.
Text data – including HTML (which includes plain text and HTML tags), CSS, and code such as JavaScript – is often transmitted uncompressed. Compressing this data can have a disproportionate impact on perceived web application performance, especially for clients with slow or constrained mobile connections.
That’s because text data is often sufficient for a user to interact with a page, where multimedia data may be more supportive or decorative. Smart content compression can reduce the bandwidth requirements of HTML, JavaScript, CSS and other text-based content, typically by 30% or more, with a corresponding reduction in load time.
If you use SSL, compression reduces the amount of data that has to be SSL-encoded, which offsets some of the CPU time it takes to compress the data.
Methods for compressing text data vary. For example, see the section on HTTP/2 for a novel text compression scheme, adapted specifically for header data. As another example of text compression, you can turn on GZIP compression in NGINX. After you pre-compress text data on your servers, you can serve the compressed .gz version directly using the gzip_static directive.
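For instance, enabling GZIP compression for text-based content in NGINX might look like the following sketch. The compression level and MIME types are illustrative choices, and gzip_static assumes NGINX was built with that module:

```nginx
# Compress text-based responses on the fly.
gzip on;
gzip_comp_level 5;        # balance CPU cost against compression ratio
gzip_min_length 256;      # skip tiny responses where gzip overhead isn't worthwhile
gzip_types text/css application/javascript application/json image/svg+xml;

# If files were pre-compressed (e.g. style.css.gz alongside style.css),
# serve the .gz version directly instead of compressing per request.
gzip_static on;
```

Note that text/html is always compressed when gzip is on, so it does not need to appear in gzip_types.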
Tip #5: Optimize SSL/TLS
The Secure Sockets Layer (SSL) protocol and its successor, the Transport Layer Security (TLS) protocol, are being used on more and more websites. SSL/TLS encrypts the data transported from origin servers to users to help improve site security. Part of what may be influencing this trend is that Google now uses the presence of SSL/TLS as a positive influence on search engine rankings.
Despite rising popularity, the performance hit involved in SSL/TLS is a sticking point for many sites. SSL/TLS slows website performance for two reasons:
-
The initial handshake required to establish encryption keys whenever a new connection is opened. The way that browsers using HTTP/1.x establish multiple connections per server multiplies that hit.
-
Ongoing overhead from encrypting data on the server and decrypting it on the client.
To encourage the use of SSL/TLS, the authors of HTTP/2 and SPDY (described in the next section) designed these protocols so that browsers need just one connection per browser session. This greatly reduces one of the two major sources of SSL overhead. However, even more can be done today to improve the performance of applications delivered over SSL/TLS.
The mechanism for optimizing SSL/TLS varies by web server. As an example, NGINX uses OpenSSL, running on standard commodity hardware, to provide performance similar to dedicated hardware solutions. NGINX SSL performance is well-documented and minimizes the time and CPU penalty from performing SSL/TLS encryption and decryption.
In addition, see this blog post for details on ways to increase SSL/TLS performance. To summarize briefly, the techniques are:
-
Session caching. Uses the ssl_session_cache directive to cache the parameters used when securing each new connection with SSL/TLS.
-
Session tickets or IDs. These store information about specific SSL/TLS sessions in a ticket or ID so a connection can be reused smoothly, without new handshaking.
-
OCSP stapling. Cuts handshaking time by caching SSL/TLS certificate information.
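The three techniques above can be sketched in NGINX configuration roughly as follows. The certificate paths and cache sizes are placeholders:

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/example.key;

    # Session caching: reuse negotiated parameters across connections,
    # shared among worker processes
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # Session tickets: let clients resume sessions without
    # server-side state
    ssl_session_tickets on;

    # OCSP stapling: attach cached certificate-status information
    # to the handshake so clients skip their own OCSP lookup
    ssl_stapling on;
    ssl_stapling_verify on;
}
```

In practice ssl_stapling_verify also requires a trusted CA chain (via ssl_trusted_certificate) so NGINX can validate the stapled response.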
NGINX and NGINX Plus can be used for SSL/TLS termination – handling encryption and decryption for client traffic, while communicating with other servers in clear text. Use these steps to set up NGINX or NGINX Plus to handle SSL/TLS termination. Also, here are specific steps for NGINX Plus when used with servers that accept TCP connections.
Tip #6: Implement HTTP/2 or SPDY
For sites that already use SSL/TLS, HTTP/2 and SPDY are very likely to improve performance, because the single connection requires just one handshake. For sites that don’t yet use SSL/TLS, HTTP/2 and SPDY make a move to SSL/TLS (which normally slows performance) a wash from a responsiveness point of view.
Google introduced SPDY in 2012 as a way to achieve faster performance on top of HTTP/1.x. HTTP/2 is the recently approved IETF standard based on SPDY. SPDY is broadly supported, but is soon to be deprecated, replaced by HTTP/2.
The key feature of SPDY and HTTP/2 is the use of a single connection rather than multiple connections. The single connection is multiplexed, so it can carry pieces of multiple requests and responses at the same time.
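Assuming NGINX was built with HTTP/2 support (available from version 1.9.5), enabling the protocol is typically a one-line change to the listen directive:

```nginx
server {
    # Adding "http2" enables HTTP/2 for clients that negotiate it;
    # other clients transparently fall back to HTTP/1.x.
    listen 443 ssl http2;
    ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/example.key;
}
```

Because browsers only speak HTTP/2 over TLS, the directive is combined with ssl, which is why the protocol pairs naturally with the SSL/TLS optimizations from Tip #5.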