Network Cache


As webpages become more sophisticated and video becomes higher definition, the volume of digital content transferred over the Internet keeps increasing. This causes network congestion, leading to problems such as longer delays in displaying webpages and sudden drops in communication quality, for example video playback stalling partway through.

To address these issues, we are researching technologies that make network services such as the web and video streaming comfortable to use, including the following themes.


Selection Method of Cache Servers in Anycast CDN

An anycast CDN selects cache servers for content delivery using the Internet's anycast function, in which IP routers choose one delivery destination from among multiple servers sharing the same address. However, as the number of cache servers assigned to the same IP address grows, the router's choice becomes less likely to be the most appropriate delivery server.

Therefore, this laboratory is working on techniques that use genetic algorithms to optimally configure a diverse set of cache servers, taking into account the spatial locality of content popularity.
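
The sketch below is only a minimal illustration of this idea, not our actual method: a toy genetic algorithm searches for a placement of content on regional cache servers that maximizes the demand served locally. The number of regions and servers, the capacities, and the popularity figures are all arbitrary assumptions for the example.

```python
import random

# Toy setup (hypothetical numbers): 4 regions, one cache server per region,
# 20 content items, and each server can hold 5 items.
N_REGIONS, N_CONTENTS, CAPACITY = 4, 20, 5
random.seed(0)
# popularity[r][c]: request rate of content c in region r (spatial locality:
# each region prefers a different slice of the catalogue).
popularity = [[random.random() * (2.0 if c % N_REGIONS == r else 0.5)
               for c in range(N_CONTENTS)] for r in range(N_REGIONS)]

def fitness(placement):
    """Demand served locally: placement[r] is the set of contents cached at
    the server that anycast routing selects for region r."""
    return sum(popularity[r][c] for r in range(N_REGIONS) for c in placement[r])

def random_placement():
    return [set(random.sample(range(N_CONTENTS), CAPACITY)) for _ in range(N_REGIONS)]

def crossover(a, b):
    # For each region, inherit that region's cached set from either parent.
    return [set(random.choice((a[r], b[r]))) for r in range(N_REGIONS)]

def mutate(p, rate=0.2):
    for r in range(N_REGIONS):
        if random.random() < rate:  # swap one cached item for a random one
            p[r].remove(random.choice(list(p[r])))
            p[r].add(random.randrange(N_CONTENTS))
    return p

# Standard generational GA loop with elitism.
pop = [random_placement() for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(20)]

best = max(pop, key=fitness)
print("expected locally served demand:", round(fitness(best), 2))
```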

Highly Reliable Cache Control Method Using Low-Reliability Edge Cache

Mobile edge caching (MEC) reduces the rapidly growing volume of mobile traffic by installing cache servers, called edge servers (ESs), in the base stations of 5G/6G wireless networks to cache highly popular content. However, because the number of base stations is huge, low-cost but unreliable ESs are used, and the resulting increase in outage rate due to failures has become an issue.

Therefore, we are working on methods for inserting content into ESs that maximize content availability when error-correcting coding is used to make cached content more resilient to ES failures.
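
As a rough, purely illustrative sketch (assuming an erasure-style code in which any k of n coded fragments suffice to reconstruct the content, with an arbitrary failure probability), the availability of one content item can be computed as the probability that enough fragments survive independent ES failures:

```python
from math import comb

def availability(n, k, p_fail):
    """Probability that at least k of the n coded fragments remain on
    reachable edge servers, assuming independent ES failures."""
    return sum(comb(n, i) * (1 - p_fail) ** i * p_fail ** (n - i)
               for i in range(k, n + 1))

# Hypothetical example: a content item split into k = 4 fragments, encoded
# into n coded fragments stored on n distinct ESs that each fail with
# probability 0.1.
for n in (4, 5, 6, 8):
    print(f"n={n}: availability = {availability(n, 4, 0.10):.4f}")
```

Adding more coded fragments raises availability but consumes more ES storage, which is one of the trade-offs a content insertion method has to balance across many content items.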

Cache Control Based on Correlation among Web Objects

In recent years, web pages have become richer and more complex, and web response times have increased. To reduce web response times, HTTP/2 was standardized by the IETF in 2015, followed by HTTP/3 in 2022.

Although HTTP/2 and HTTP/3 reduce latency by fetching multiple objects in parallel over a single transport connection (a TCP connection for HTTP/2, a QUIC connection for HTTP/3), parallel delivery is only possible for the set of web objects retrieved from the same delivery server. Therefore, when small numbers of objects are retrieved from many different delivery servers, the benefit of HTTP/2 and HTTP/3 parallel delivery is limited.

To this end, our laboratory is working on cache control techniques that prioritize which web objects to keep in the cache based on the correlations among web objects.
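
The following is a minimal sketch of this idea under simplifying assumptions: a toy access log, co-occurrence counts as the correlation measure, and a hypothetical evict() policy. It is meant only to show how correlation can drive eviction decisions, not to describe our actual algorithm.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical access log: each entry is the set of objects fetched by one
# page view. Objects that appear together are treated as correlated.
page_views = [
    {"index.html", "style.css", "app.js", "logo.png"},
    {"index.html", "style.css", "app.js", "hero.jpg"},
    {"article.html", "style.css", "app.js"},
    {"video.html", "player.js", "thumb.jpg"},
]

# Co-occurrence counts as a simple correlation measure.
cooc = defaultdict(int)
for view in page_views:
    for a, b in combinations(sorted(view), 2):
        cooc[(a, b)] += 1

def correlation(obj, cached):
    """How strongly obj is correlated with the other cached objects."""
    return sum(cooc[tuple(sorted((obj, other)))] for other in cached if other != obj)

def evict(cached):
    """Evict the object least correlated with the rest of the cache, so that
    groups of co-requested objects tend to stay cached together."""
    victim = min(cached, key=lambda o: correlation(o, cached))
    cached.remove(victim)
    return victim

cache = {"index.html", "style.css", "app.js", "thumb.jpg"}
print("evicted:", evict(cache))  # thumb.jpg: weakest correlation with the rest
```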

Content Recommendation Technology Considering Cache Status

In content delivery services such as Netflix, recommendations based on user preferences now drive a large share of content requests. However, most content is delivered through CDNs, and conventional recommendation schemes select recommended content without considering the CDN's cache state, so recommendations can degrade cache performance.

Therefore, we are working on a content recommendation method that takes both user preferences and the cache state into account.
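
A minimal sketch of the idea, with a hypothetical scoring rule: the recommendation score is the user's preference plus a small bonus alpha for content that is already cached, so alpha controls the trade-off between recommendation quality and cache hit rate. The titles, scores, and the linear scoring rule itself are illustrative assumptions, not our actual method.

```python
# Hypothetical per-user preference scores and current CDN cache contents.
preferences = {"movie_a": 0.92, "movie_b": 0.90, "movie_c": 0.85, "movie_d": 0.40}
cached = {"movie_b", "movie_c"}

def recommend(preferences, cached, k=2, alpha=0.05):
    """Rank content by preference plus a small bonus for items already in the
    cache; larger alpha favors cache hits over pure preference ranking."""
    score = lambda c: preferences[c] + (alpha if c in cached else 0.0)
    return sorted(preferences, key=score, reverse=True)[:k]

print(recommend(preferences, cached))  # ['movie_b', 'movie_a'] with alpha=0.05
```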

Caching Techniques at Mobile Terminals Using Deep Learning

More and more people are watching videos on mobile devices such as smartphones. Because video data is large, communication resources on mobile networks are expected to become scarce, so it is effective to use mobile terminals themselves as caches and distribute content among them. Since the memory of mobile devices is small, however, it is necessary to carefully select which content each device stores based on future demand for that content.

To this end, we are working on caching techniques that use deep learning to predict which content will be highly popular in the near future at the locations mobile devices are heading toward, and select the content each device should store.
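
A minimal sketch of the selection step only: here predict_popularity() is a placeholder returning fixed toy scores, standing in for a trained deep learning model that would estimate future popularity at the terminal's destination, and the greedy packing by popularity per megabyte is likewise just one illustrative choice rather than our actual method.

```python
# Placeholder for a trained deep-learning model: in practice this could be a
# recurrent or other neural model fed with request histories observed around
# the terminal's predicted destination. Here it just returns fixed toy scores.
def predict_popularity(destination, contents):
    toy_scores = {"clip_1": 0.8, "clip_2": 0.6, "clip_3": 0.3, "clip_4": 0.1}
    return {c: toy_scores.get(c, 0.0) for c in contents}

sizes_mb = {"clip_1": 300, "clip_2": 500, "clip_3": 200, "clip_4": 100}

def select_cache(destination, contents, memory_mb):
    """Greedily fill the terminal's limited memory with the content predicted
    to be most popular per megabyte at its destination."""
    pred = predict_popularity(destination, contents)
    chosen, used = [], 0
    for c in sorted(contents, key=lambda c: pred[c] / sizes_mb[c], reverse=True):
        if used + sizes_mb[c] <= memory_mb:
            chosen.append(c)
            used += sizes_mb[c]
    return chosen

print(select_cache("station_A", list(sizes_mb), memory_mb=600))
```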

Improving Scalability of Information-Centric Networking

Information-Centric Networking (ICN) is attracting attention as a next-generation network architecture for efficient content delivery. ICN makes it possible to request content directly by content name, without identifying the host that delivers it. In addition, because content can be served from caches at routers on the request forwarding path, content can be delivered efficiently while consuming fewer network resources.

In ICN, however, routers must store forwarding information keyed by content name, and the enormous number of content items on the network causes the forwarding table to explode in size. Suppressing the required forwarding table size is therefore an important issue for scaling ICN up to large networks.

Our laboratory is therefore researching ways to effectively reduce the size of the forwarding table in ICN routers. For example, we are studying how to place content originals on the cache servers of a Content Delivery Network (CDN) so that forwarding table entries aggregate better, and how to control content delivery routes accordingly.
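
The toy example below illustrates why such placement helps, under the assumption of a hierarchical, IP-like name-prefix forwarding table: when sibling names reach the same next hop (because the corresponding content originals are co-located), their entries can collapse into a single shorter prefix. The names, faces, and aggregate() rule are hypothetical.

```python
# Hypothetical name-based FIB: longest-prefix match on hierarchical content
# names, analogous to IP routing but with name components.
fib = {
    ("com", "example", "videos", "a"): "face1",
    ("com", "example", "videos", "b"): "face1",
    ("com", "example", "videos", "c"): "face1",
    ("org", "news", "today"): "face2",
}

def aggregate(fib):
    """Collapse sibling entries that share a parent prefix and the same next
    hop into one shorter-prefix entry, shrinking the forwarding table."""
    parents = {}
    for name, face in fib.items():
        parents.setdefault((name[:-1], face), []).append(name)
    out = {}
    for (parent, face), children in parents.items():
        if len(children) > 1:
            out[parent] = face          # one entry now covers all the children
        else:
            out[children[0]] = face
    return out

def lookup(fib, name):
    """Longest-prefix match on name components."""
    for i in range(len(name), 0, -1):
        if tuple(name[:i]) in fib:
            return fib[tuple(name[:i])]
    return None

small_fib = aggregate(fib)
print(len(fib), "->", len(small_fib), "entries;",
      lookup(small_fib, ("com", "example", "videos", "b", "seg0")))
```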

Migration Techniques for Information-Centric Networking

While Information-Centric Networking (ICN) is attracting attention as a new network architecture that will streamline the delivery of IoT data and content, it is unrealistic to expect the entire current Internet to be replaced by ICN all at once; instead, ICN is expected to be introduced gradually, in parts.

As a result, ICN and the existing Internet (IP networks) are expected to coexist, so techniques are needed to forward packets between routers that use different data forwarding methods: ICN forwards packets based on content names, whereas IP networks forward packets based on IP addresses.

To this end, we are working on technologies related to gateway functions that enable packet forwarding between ICN routers and IP routers.
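
Conceptually, such a gateway must translate between name-based and address-based forwarding. The sketch below shows only that translation step, with hypothetical name prefixes and documentation-range IP addresses; it omits real packet formats, caching, and state handling.

```python
# Hypothetical mapping from ICN name prefixes to IP-network origin servers.
PREFIX_TO_ORIGIN = {
    ("video", "provider-a"): "203.0.113.10",   # documentation-range addresses
    ("sensor", "campus"): "203.0.113.20",
}

def interest_to_ip_request(name):
    """Map an ICN Interest (a hierarchical content name) to the IP endpoint
    and path used to fetch the same content from an IP-network origin."""
    for prefix, origin_ip in PREFIX_TO_ORIGIN.items():
        if tuple(name[:len(prefix)]) == prefix:
            path = "/" + "/".join(name[len(prefix):])
            return origin_ip, path
    raise LookupError("no route for name " + "/".join(name))

def ip_response_to_data(name, payload):
    """Wrap bytes returned over IP into an ICN-style Data unit so that it can
    be forwarded (and cached) by ICN routers on the return path."""
    return {"name": name, "content": payload}

ip, path = interest_to_ip_request(("video", "provider-a", "movie1", "seg3"))
print(ip, path)   # 203.0.113.10 /movie1/seg3
data = ip_response_to_data(("video", "provider-a", "movie1", "seg3"), b"...")
```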

Ecosystem Analysis of Information-Centric Networking

Because the introduction of Information-Centric Networking (ICN) will change traffic exchange patterns among Internet Service Providers (ISPs), and ISPs pay transit fees based on the amount of traffic exchanged between them, deploying ICN will affect ISP revenues. Therefore, in order to clarify the potential for ICN deployment, it is necessary to quantitatively analyze the impact of ICN deployment on ISP revenues.

To this end, we analyze the impact of ICN deployment on ISP revenues by modeling the amount of traffic exchanged between ISPs when ICN is deployed in hierarchical ISP topologies, and by multi-agent simulation using actual inter-ISP topologies. We are also researching measures to promote the widespread adoption of ICN.
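
As a simple illustration of the kind of modeling involved (all prices and traffic volumes below are hypothetical), the transit fee a lower-tier ISP pays, which is the upper-tier ISP's transit revenue, falls linearly with the fraction of requests served from ICN caches inside the lower-tier ISP:

```python
# Toy model (hypothetical numbers): a lower-tier ISP pays its transit provider
# a per-gigabyte fee for traffic pulled from upstream. ICN caching serves a
# fraction h of requests inside the lower-tier ISP, cutting the transit
# traffic and therefore the fee it pays (= the upper-tier ISP's revenue).
demand_gb  = 10_000      # monthly content demand of the lower-tier ISP
fee_per_gb = 0.02        # transit price in some currency unit

def transit_fee(cache_hit_ratio):
    upstream_traffic = demand_gb * (1.0 - cache_hit_ratio)
    return upstream_traffic * fee_per_gb

for h in (0.0, 0.3, 0.6):
    print(f"hit ratio {h:.1f}: transit fee {transit_fee(h):.2f}")
```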