
javascript - How to reduce Time to First Byte


I am using WordPress on a dedicated server from Namecheap, with only one site running on it. Even so, the waterfall shows a Time to First Byte of around 500 ms, while I would like to get it down to roughly 100 ms. This is my site (http://ucbrowserdownload.net/); in the waterfall you can see that, from my point of view, everything looks fine, yet I still have no solution. You can also look at http://labnol.org/, which is also built on WordPress with the same theme. Even though I load very few images or posts on my index page, I still get a huge waterfall. I would like to know how to fix all of this and where the problem lies: in WordPress, in the theme, or in the hosting. I am completely stuck and have had no solution for the last few weeks. Any help is much appreciated. Thanks.

Best Answer

Original Source

Optimization of Nginx

This article presents an optimal Nginx configuration. We will briefly go over the parameters that are already familiar and add a few new ones that directly affect TTFB.

Connections

First we need to define the number of Nginx worker processes (worker_processes). Each worker process can handle many connections and is bound to a physical CPU core. If you know exactly how many cores your server has, you can specify the number yourself, or you can let Nginx decide:

worker_processes auto;
# Let Nginx determine the number of worker processes automatically

In addition, you need to specify the number of connections each worker process can handle:

worker_connections 1024;
# Number of connections per worker process, typically between 1024 and 4096

Requests

For the web server to process the maximum number of requests, you need to enable the multi_accept directive, which is off by default:

multi_accept on;
# Worker processes will accept all new connections at once

Note that this option is only useful when a large number of requests arrive at the same time. If there are not that many requests, it makes more sense to optimize the worker processes so that they do not work in vain:

accept_mutex on;
# Worker processes will accept new connections in turn

TTFB and server response time also depend on the tcp_nodelay and tcp_nopush directives:

tcp_nodelay on;
tcp_nopush on;
# Enable the tcp_nodelay and tcp_nopush directives

Without going into too much detail, these two options disable certain TCP features that were relevant in the 90s, when the Internet was just gaining momentum, but make little sense today. The first directive sends data as soon as it is available (bypassing the Nagle algorithm). The second sends the response headers (the web page) together with the beginning of the file without waiting for the packet to fill up (i.e. it enables TCP_CORK). This lets the browser start rendering the web page earlier.

At first glance the two options look contradictory, which is why tcp_nopush should be used together with sendfile. In that case packets are filled before being sent, since sendfile is much faster and more efficient than the read + write approach. Once a packet is full, Nginx automatically disables tcp_nopush, and tcp_nodelay makes the socket send the data. Enabling sendfile is very simple:

sendfile on;
# Enable a more efficient file-sending method than read + write

Together, these three directives reduce network load and speed up file delivery.

Buffers

Another important optimization concerns buffer sizes: if they are too small, Nginx will frequently hit the disk; if they are too large, they will quickly fill up RAM. Four directives need to be set. client_body_buffer_size and client_header_buffer_size set the buffer sizes for reading the client request body and headers, respectively. client_max_body_size sets the maximum size of a client request, and large_client_header_buffers specifies the maximum number and size of buffers for reading large request headers.

The optimal buffer settings will look like this:

client_body_buffer_size 10k;
client_header_buffer_size 1k;
client_max_body_size 8m;
large_client_header_buffers 2 1k;
# 10k buffer for the request body, 1k for the headers, 8m maximum request size, and 2 buffers of 1k for large headers

Timeouts and keepalive

Properly configured timeouts and keepalive settings can also significantly improve server responsiveness.

The client_body_timeout and client_header_timeout directives set the timeouts for reading the request body and headers:

client_body_timeout 10; 
client_header_timeout 10;
# Set the waiting time in seconds

If the client stops responding, you can tell Nginx to close such connections with reset_timedout_connection:

reset_timedout_connection on;
# Close connections that have timed out

The keepalive_timeout directive sets how long to wait before closing a keepalive connection, and keepalive_requests limits the number of requests a single client can make over one keepalive connection:

keepalive_timeout 30; 
keepalive_requests 100;
# Set the keepalive timeout to 30 seconds and limit each client to 100 requests per connection

Finally, send_timeout sets how long to wait between two write operations when transmitting the response:

send_timeout 2;
# Nginx will wait up to 2 seconds between two write operations

Caching

Enabling caching can significantly improve server response time. Caching with Nginx is covered in more detail in a separate article; what matters here is enabling cache-control. Nginx can ask clients to cache rarely changing data that is frequently used on the client side. To do this, add a line to the server section:

location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ { expires 365d; }
# Specify the target file formats and the cache duration
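If you prefer to set the Cache-Control header explicitly instead of the expires shorthand (which itself emits a Cache-Control: max-age header), an equivalent sketch with an illustrative one-year lifetime would be:

location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    add_header Cache-Control "public, max-age=31536000";
    # One year in seconds; applies only to the static assets matched by the regex
}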

It also does not hurt to cache information about frequently used files:

open_file_cache max=10000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
# Cache metadata for up to 10,000 files, revalidated every 30 seconds

open_file_cache specifies the maximum number of files whose metadata is stored and for how long. open_file_cache_valid sets how often the cached information should be revalidated, open_file_cache_min_uses specifies the minimum number of client accesses before a file is cached, and open_file_cache_errors enables caching of file lookup errors.

Logging

Logging is another feature that can noticeably reduce the performance of the whole server and, with it, the response time and TTFB. The best solution is therefore to disable the access log and record only critical errors:

access_log off;
error_log /var/log/nginx/error.log crit;
# Turn off access logging and keep only critical errors

Gzip compression

The usefulness of Gzip is hard to overstate. Compression can significantly reduce traffic and relieve the channel. But it has a downside: compression takes time, so turning it off would improve TTFB and server response time. At this stage, however, we cannot recommend disabling Gzip, because compression improves the Time to Last Byte, i.e. the time needed for the full page to load, and in most cases that is the more important metric. TTFB and server response time also benefit greatly from the large-scale adoption of HTTP/2, which has built-in header compression and multiplexing, so in the future the gain from disabling Gzip may not be as noticeable as it is now.
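For reference, a typical Gzip setup in the http section might look like the sketch below; the compression level, minimum length, and MIME type list are illustrative values, not part of the original article:

gzip on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml image/svg+xml;
# Compress text-based responses larger than 256 bytes at a moderate compression level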

PHP Optimization: FastCGI in Nginx

Modern sites rely on server-side technology such as PHP, which is also important to optimize. Normally PHP opens a file, validates and compiles the code, then executes it. With the OPcache module, PHP can cache the compiled result for rarely changing files so that this work is not repeated. And Nginx, connected to PHP through the FastCGI module, can store the output of a PHP script and serve it to the user instantly.
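As a rough sketch of the FastCGI caching mentioned above (the cache path, zone name, and PHP-FPM socket are assumptions for illustration, not values from the article):

fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=phpcache:10m inactive=60m;
# Declared in the http section: a 10 MB key zone, entries evicted after 60 minutes of inactivity

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;
    fastcgi_cache phpcache;
    fastcgi_cache_valid 200 10m;
    # Serve cached output of successful PHP responses for 10 minutes
}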

The most important

Resource optimization and correct web server settings are the main factors affecting TTFB and server response time. Also do not forget to keep your software updated to the latest stable releases, which bring their own optimizations and performance improvements.
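Putting the pieces together, a consolidated configuration sketch using the example values from this answer (tune them for your own server) could look like this:

worker_processes auto;

events {
    worker_connections 1024;
    multi_accept on;
}

http {
    sendfile on;
    tcp_nodelay on;
    tcp_nopush on;
    client_body_timeout 10;
    client_header_timeout 10;
    send_timeout 2;
    reset_timedout_connection on;
    keepalive_timeout 30;
    keepalive_requests 100;
    access_log off;
    error_log /var/log/nginx/error.log crit;
    # The buffer, open_file_cache, expires, Gzip, and FastCGI cache settings from the sections above go here as well
}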

Regarding javascript - How to reduce Time to First Byte, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/37524175/
