Since ETSI introduced the architectural framework of network function virtualization (NFV), telecom operators have paid increasing attention to the synergy of NFV and cloud computing. With the integration of the NFV cloud platform, telecom operators decouple network functions from dedicated hardware and run virtualized network functions (VNFs) on the cloud. However, virtualization degrades VNF performance, violating the performance requirements of the telecom industry. Most existing works were not conducted in a cloud computing environment, and fewer studies focused on the usage of enhanced platform awareness (EPA) features. Furthermore, few works analyze the performance of the service function chain on a practical cloud.

Throughput is a commonly used performance indicator for networks. However, throughput may be considered insignificant if data is outdated or networks become unpredictable or unreliable. Critical services may even prioritize latency, predictability, and reliability at the expense of throughput to avoid detrimental effects on service operation. Latency, predictability, and reliability are distinct qualities realized in real-time systems. Real-time systems often require additional effort, using non-standard interfaces, requiring customized software, or providing low throughput figures. This work picks up the challenge and investigates a single-server network function, a building block for end-to-end low-latency network applications. Assessing reliability and quantifying low latency is equally challenging, as sub-microsecond latency and a 1/10^5 loss probability leave little room for error. Both our measurement setup and the investigated platforms rely on Linux running on off-the-shelf components. Our paper provides a comprehensive study of the impact of various components on latency and reliability, such as the central processing unit (CPU), the Linux kernel, the network card, virtualization features, and the networking application itself. We chose Suricata, an intrusion prevention system (IPS) representing a widely deployed, typical network application, as our primary subject of investigation.

With the expansion of network scales, the B/S architecture of monolithic applications is gradually being replaced by microservices. The unbundling of services has led to exponential growth in the size of APIs. When handling massive microservice requests, commercial NICs show limitations in three aspects: determinism, programmability, and data copying. To ensure that each microservice node handles requests efficiently, flexibly, and precisely, this paper proposes a programmable deterministic multi-queue FPGA Accelerator. The Accelerator relies on 1000 instantiated queues and a queue management unit to extend the rule-based RSS algorithm for serverless-friendly programmability of packet distribution. A PTP hardware clock is added to collaborate with the queue management unit to control deterministic delivery. To improve the sending and receiving efficiency of network node data, a driver adapted to the FPGA Accelerator is designed to realize zero-copy. Experiments conducted on a 100 Gbps FPGA show that the Accelerator can support multi-queue transmission with various packet sizes, define forwarding behavior, and almost approach line rate on an 8-core FPGA device. In addition, it can forward packets with low latency, close to that of the current state-of-the-art OVS-DPDK. This Accelerator overcomes, to some extent, the limitations of commercial NICs when oriented to microservice architectures.
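The rule-based extension of RSS described in the Accelerator abstract can be approximated in software: flows matching an explicit, operator-installed rule are pinned to a fixed queue, and everything else falls back to hash-based spreading. The sketch below is a toy model under assumed rule semantics; the real logic lives in FPGA gateware, and all names here are illustrative, not from the paper.

```python
# Illustrative model of rule-based RSS queue selection.
# A real NIC hashes the 5-tuple with a Toeplitz hash; Python's
# built-in hash() stands in for brevity. Rule semantics (exact
# 5-tuple match) are an assumption, not taken from the paper.

NUM_QUEUES = 1000  # the abstract mentions 1000 instantiated queues

def select_queue(flow, rules, num_queues=NUM_QUEUES):
    """flow:  (src_ip, dst_ip, src_port, dst_port, proto) tuple.
    rules: dict mapping an exact 5-tuple to a fixed queue id.
    Falls back to hash-based RSS-style spreading when no rule matches."""
    if flow in rules:
        return rules[flow]          # deterministic, operator-defined
    return hash(flow) % num_queues  # classic RSS-style spreading

rules = {("10.0.0.1", "10.0.0.2", 5000, 80, "tcp"): 7}
print(select_queue(("10.0.0.1", "10.0.0.2", 5000, 80, "tcp"), rules))  # 7
```

The point of the rule path is determinism: a latency-critical microservice flow always lands on the same queue regardless of hash behavior, which is what the hash-only RSS of commercial NICs cannot guarantee.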
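To see why the 1/10^5 loss probability mentioned in the latency study leaves so little room for error, note that a measurement must observe on the order of 100,000 packets before a single expected loss event even appears. A minimal sketch of the arithmetic, with hypothetical helper names not taken from the paper:

```python
# Toy illustration of loss-probability and tail-latency estimation
# from packet counts and latency samples. All function names are
# hypothetical; real measurement pipelines are hardware-assisted.

def loss_probability(sent: int, received: int) -> float:
    """Fraction of packets that never arrived."""
    return (sent - received) / sent

def percentile(samples, p):
    """p-th percentile (0..100) of a list of latency samples."""
    s = sorted(samples)
    idx = min(len(s) - 1, int(round(p / 100 * (len(s) - 1))))
    return s[idx]

# At a 1/10^5 budget, losing 11 packets out of a million already
# exceeds the target:
sent, received = 1_000_000, 999_989
print(loss_probability(sent, received))  # 1.1e-05 -- just over budget
```

This also explains why sub-microsecond latency figures demand large sample counts: a 99.999th-percentile statement is meaningless with fewer than ~10^5 samples.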