How to Fix Your Latency Issues

Speed is one of the most important factors for a website, especially if you run a business: fast loading improves both SEO and user experience. So, to make your website load faster, latency is an important thing to pay attention to.

In this article, we’ll help you learn more about what latency is and how to speed up your website by reducing network latency.

We’ll cover which factors affect latency and how to monitor and reduce it. We’ll also explain the causes of high latency and the differences between latency, throughput, and bandwidth.

OK, let’s get started!

3 Factors That Affect Latency

In this section, we’ll discuss the things that affect latency, from data transmission distance to the hardware and software used. In general, they are:

1. Distance

Yup, one of the most significant factors affecting latency is distance. Connection speed depends on the distance between the requesting device and the responding web hosting server. The farther the distance, the longer the latency.

For example, a website hosted in France will respond more quickly to requests from users in Paris than to users in the United States, simply because Paris is much closer to the server than the United States is.
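Distance sets a hard floor on latency: signals in optical fiber travel at roughly two-thirds of the speed of light in a vacuum. A minimal sketch of that lower bound (the distance figure and propagation factor are rough assumptions for illustration):

```python
# Rough lower bound on latency from distance alone. Light in optical
# fiber travels at roughly two-thirds of its speed in a vacuum.
C_VACUUM_KM_S = 299_792   # speed of light in a vacuum, km/s
FIBER_FACTOR = 0.67       # approximate propagation factor in fiber

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time over fiber, ignoring routing,
    queueing, and processing delays (real paths are longer and slower)."""
    one_way_s = distance_km / (C_VACUUM_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000  # round trip, in milliseconds

# Paris to New York is roughly 5,800 km in a straight line (assumption):
print(f"{min_rtt_ms(5_800):.0f} ms")
```

Even in this best case, the transatlantic round trip costs tens of milliseconds before any server processing happens, which is why distance matters so much.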

2. Web Page Size

Web page size includes all the files on the page, including images, videos, scripts, text content, code, and stylesheets.

Web pages that embed content from third-party websites, or that carry feature-heavy content such as large, high-resolution images, may take longer to load.

3. Software and Hardware Used

Apart from distance and web page size, the hardware and software used to transfer data from one point to another can affect network latency. Examples include transmission media such as fiber-optic cables, along with routers and Wi-Fi access points.

Devices such as load balancers, firewalls, and IPSes (Intrusion Prevention Systems) can also be a factor, because every component involved in the network flow has its own limitations and adds its own processing delay.

For example, fiber-optic cables can transmit data over longer distances and with higher bandwidth than copper cables. However, their light, very thin construction also makes them more susceptible to damage.

The Difference Between Latency, Throughput, and Bandwidth

Although the three are related, in fact network latency, throughput, and bandwidth are not the same.

Latency is the time it takes for a data packet to travel from the client to the server and back again.

Throughput is the amount of data actually transferred over the network within a certain period of time. It is calculated with latency taken into account.

Meanwhile, bandwidth is the maximum volume of data that can be transferred over the network in a given time. The wider the bandwidth, the more data can be sent over the network at once.

For fast and efficient data transmission results with high throughput, you need high bandwidth and low network latency.

Low bandwidth with low latency is still not ideal: even if data is transmitted almost without delay, the amount of data sent over the network (throughput) may remain low.
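To see how latency can cap throughput even on a fast link, here is a small sketch of the bandwidth-delay relationship: a sender that keeps at most one window of unacknowledged data in flight (as TCP does) can never exceed one window per round trip. The window size and link speeds below are illustrative assumptions, not measurements:

```python
def max_throughput_bps(bandwidth_bps: float, window_bytes: int, rtt_s: float) -> float:
    """Achievable throughput is capped both by the link bandwidth and by
    how much unacknowledged data can be in flight per round trip."""
    # Window-limited throughput: at most one full window per round trip.
    window_limited = (window_bytes * 8) / rtt_s  # bits per second
    return min(bandwidth_bps, window_limited)

# A 100 Mbps link with a 64 KiB window and 100 ms of round-trip latency:
# 64 KiB * 8 / 0.1 s is about 5.2 Mbps, so latency is the bottleneck here,
# not the 100 Mbps of available bandwidth.
print(max_throughput_bps(100e6, 64 * 1024, 0.100))
```

This is why reducing latency can speed up a site even when bandwidth looks more than sufficient on paper.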

What Causes High Latency?

Some common causes of high latency are:

  • DNS server errors. A DNS or web server that isn’t working properly can slow down the network or even stop visitors from reaching your website at all. Examples of server issues users may encounter include Error 404 and Error 500.
  • Network device problems. Network devices such as routers and switches that are running out of memory or experiencing high CPU usage can delay the data transfer process.
  • Bad transmission medium. Enterprises must choose the transmission media carefully because the combination of incompatible hardware and software can cause high latency.
  • Multiple routers. Every time a data packet hops from one router to another, latency increases, so traffic that passes through many routers slows down. Besides delays, this can also cause packet loss.
  • Suboptimal routing plan. For faster data transfer, you need proper dynamic routing: a technique in which the router evaluates the possible paths that traffic can take across the network before choosing the best one.
  • Poorly optimized back-end database. A website database with a poor schema and no optimization can cause high latency. Common culprits include imprecise indexes and complex calculations, and a database that isn’t optimized for different types of devices can also slow down website performance.
  • Bad environmental conditions. Strong winds, storms, or heavy rain can interfere with the satellite’s wireless signal, affecting your internet connection and causing latency issues.
  • Problems with the user’s device. Apart from network devices, insufficient memory or RAM and high CPU usage on user devices can also cause latency. In addition, if the user’s bandwidth is insufficient and the internet equipment is out of date, of course this can cause a slow internet connection.

How to Monitor Latency

Now that you know what latency is and what causes it, we’ll go over various network monitoring tools you can use to test and measure network latency.

You can try the following three ways to check latency and network connection:

  • Ping. Ping (Packet InterNet Groper) is a command that verifies whether a particular IP address is reachable and how quickly it handles requests. In the IP ping method, the device sends ICMP (Internet Control Message Protocol) echo request packets to the target host over the IP network and then waits for the echo reply.
  • Traceroute. Using the tracert (Windows) or traceroute command, network administrators can send data packets across the network and monitor the path they take. The command also shows the number of hops needed to reach the host and the round-trip time for each hop, and it can reveal cases where packets take several different paths.
  • My Traceroute (MTR). MTR is a latency test tool that combines ping and traceroute. This method is the most detailed. MTR provides real-time information about hops, latency, and packet loss along network paths.

You can perform the network latency test method above using a variety of OS, including Windows, Linux, and macOS.
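ICMP ping usually requires elevated privileges, so as an illustrative alternative you can estimate round-trip time from ordinary code by timing a TCP handshake, which completes after one round trip (SYN, then SYN/ACK). A minimal sketch in Python; the host and port in the usage comment are placeholders:

```python
import socket
import time

def tcp_rtt(host: str, port: int, timeout: float = 3.0) -> float:
    """Approximate network round-trip time by timing a TCP handshake.

    connect() returns once the SYN/ACK arrives, i.e. after one round
    trip, so the elapsed time is a reasonable RTT estimate, similar
    in spirit to what `ping` reports with ICMP.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only wanted the handshake
    return time.perf_counter() - start

# Hypothetical usage (replace with a host and port you can reach):
# print(f"RTT: {tcp_rtt('example.com', 443) * 1000:.1f} ms")
```

Unlike ICMP ping, this also works through firewalls that block ICMP but allow normal web traffic.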

How to Measure Network Latency

There are two ways to measure latency: Round Trip Time (RTT) and Time to First Byte (TTFB).

Round Trip Time (RTT) refers to the time it takes for data packets to travel from the client to the server and back. Meanwhile, Time to First Byte (TTFB) is the length of time between the client sending a request and receiving the first byte of the server’s response.

The unit used to measure latency is milliseconds. When checking website speed, this speed is also often called the ping rate. The lower the ping rate, the lower the latency.

An acceptable network usually has an average ping rate between 50 and 100 milliseconds, and a value below 50 ms is considered a good ping rate. A latency of more than 100 milliseconds, on the other hand, counts as a high ping and usually results in a slow network.
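As a rough illustration of the TTFB side, here is a minimal Python sketch that times how long a server takes to return its response headers. Note that browser tooling usually also counts DNS lookup and connection setup toward TTFB; this sketch deliberately times only the request/response exchange, and the hostname in the usage comment is a placeholder:

```python
import http.client
import time

def time_to_first_byte(host: str, port: int = 80, path: str = "/") -> float:
    """Measure TTFB: the time from sending the request until the first
    bytes of the response (the status line and headers) arrive."""
    conn = http.client.HTTPConnection(host, port, timeout=10)
    try:
        start = time.perf_counter()
        conn.request("GET", path)
        conn.getresponse()  # returns once the response headers are read
        return time.perf_counter() - start
    finally:
        conn.close()

# Hypothetical usage against a server you run:
# print(f"TTFB: {time_to_first_byte('example.com') * 1000:.1f} ms")
```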

3 Tips to Fix High Latency Issues

In this section, we’ll help you learn how to fix high latency and get the most out of your network connectivity.

Without further ado, here’s how to reduce latency and speed up page loading:

1. Use a CDN

A Content Delivery Network (CDN) is a network of servers spread across various locations around the world that helps speed up the delivery of website content.

A CDN is also known as a content distribution network. Without one, the visitor’s browser connects directly to the origin server, the computer that hosts the original version of the website’s files, and requests content from it.

As explained earlier, distance affects network latency, so visitors located far from the origin server may experience slow website loading. Well, a CDN can help solve this problem.

CDN servers cache website content from the origin server. When a visitor accesses the website again, the browser connects to the CDN server closest to the visitor rather than to the origin server.

This shorter distance reduces latency and allows web pages to load more quickly for visitors.

Apart from that, a CDN can also help distribute traffic, prevent server overload, and improve website security. Not to mention, a CDN reduces bandwidth consumption, one of the main costs of web hosting.

2. Reduce HTTP Requests

Although external scripts and resources can add functionality to a website, having too many of them is one of the main causes of high latency.

When you reference content hosted on another server instead of your own, the browser sends an external HTTP request. This can include requesting content such as media files or JavaScript and CSS.

The data transfer rate for external HTTP requests depends on the quality and performance of third-party servers. If the server is experiencing problems, it is likely that the latency will increase and disrupt the website user experience.

In addition, you can speed up website page loading by implementing other website optimization methods, such as minifying code and optimizing images.
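One way to get a feel for how many external HTTP requests a page triggers is to count references to other domains in its HTML. A minimal sketch using the standard library; the page snippet and all domain names are made up for illustration:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ExternalResourceCounter(HTMLParser):
    """Collect references to resources hosted on other domains.

    Each such reference costs the browser an extra external HTTP request
    whose speed depends on a third-party server.
    """

    def __init__(self, own_host: str):
        super().__init__()
        self.own_host = own_host
        self.external = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # src covers scripts, images, iframes; href covers stylesheets.
        url = attrs.get("src") or (attrs.get("href") if tag == "link" else None)
        if url:
            host = urlparse(url).netloc
            if host and host != self.own_host:
                self.external.append(url)

# Hypothetical page fragment for illustration:
page = """
<img src="/logo.png">
<script src="https://cdn.example.net/app.js"></script>
<link rel="stylesheet" href="https://fonts.example.org/style.css">
"""
counter = ExternalResourceCounter("www.example.com")
counter.feed(page)
print(len(counter.external))  # the local /logo.png is not counted
```

Auditing a page this way makes it easy to spot third-party dependencies you could bundle, self-host, or drop.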

3. Use the Pre-Fetching Technique

The next way to reduce latency is pre-fetching. When working on website coding, developers can insert lines of pre-fetching code to instruct the browser to load certain resources first before other website files.

Well, the pre-fetching technique has three main types, namely:

  • DNS pre-fetching. When a visitor opens a web page, the browser can resolve the domain names behind the links on that page in advance. When the visitor then clicks a link that has been DNS pre-fetched, they don’t have to wait for the DNS lookup to finish, because it has already been done.
  • Pre-fetching links. This allows the browser to download documents that the user might open in the near future. For example, you have enabled pre-fetching for image links. After finishing loading the page, the browser will pre-fetch the image from the image URL, then download it and cache it.
  • Pre-rendering. This process renders the entire page in the background instead of downloading only the important resources for that page. This is done to speed up loading times when a visitor clicks on a link to a previously rendered page.
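In standard HTML, these three types correspond to resource-hint `<link>` tags placed in a page’s `<head>`. A minimal illustration (the URLs are placeholders):

```html
<!-- Resolve DNS early for a domain this page will need -->
<link rel="dns-prefetch" href="https://cdn.example.com">

<!-- Download a document the user is likely to open next -->
<link rel="prefetch" href="/next-article.html">

<!-- Render a likely-next page in the background -->
<link rel="prerender" href="/next-article.html">
```

Note that support varies by browser; the `prerender` hint in particular has been superseded in some browsers by newer speculative-loading mechanisms.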

A number of search engines, such as Google, use pre-fetching techniques to deliver an optimal user experience.

After providing a list of search results for the terms the user typed in, the search engine pre-fetches the pages the user is most likely to visit, usually the first or second result.

Conclusion

Latency is the time it takes for data packets to travel from the client device to the server and back. Lower latency indicates better network connectivity, because data is transmitted, and websites load, faster.

Some factors that affect latency are distance, web page size, and the software and hardware used for transmission. To measure latency (in milliseconds), use ping, traceroute, or MTR; a value below 100 ms is generally acceptable, and below 50 ms is good.

The causes of high latency range from DNS server errors to problems with the user’s device. To deal with high latency, use a CDN, reduce external HTTP requests, and implement pre-fetching techniques in your website’s code.

Hopefully this article helps you understand more about what latency is and how to overcome it to maintain website speed. If you still have questions, don’t hesitate to submit them in the comments!