Karl-Johan Grinnemo
  • Universitetsgatan 2
    SE-65188
    Karlstad
    SWEDEN
  • +46(0)547002440


This paper presents a new task-scheduling strategy, SATS, for Mobile Edge Computing (MEC). SATS uses a simple simulated annealing-based method for scheduling tasks and shows that it can be a promising solution for online task scheduling in a MEC architecture. The paper evaluates three types of predictors: neutral, conservative, and optimistic, and concludes that using a conservative predictor that overestimates the number of service requests leads to the best performance in terms of higher acceptance rates and shorter processing times. With its simulated annealing-based method, SATS offers an acceptance ratio that is only 5% lower than what it could have been if it had known the frequency of service request arrivals beforehand. The simplicity and efficiency of the SATS strategy are highlighted as it deviates less than 20% from this acceptance ratio in all conducted experiments. Index Terms: online task scheduling, simulated annealing, mobile edge computing, task offloading
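The annealing core of a scheduler like SATS can be sketched compactly. The sketch below is illustrative only: the cost matrix, the move operator (re-placing one task), and the geometric cooling schedule are assumptions for illustration, not the paper's actual model.

```python
import math
import random

def anneal_schedule(costs, iters=2000, t0=1.0, cooling=0.995, seed=1):
    """Simulated annealing over task-to-server assignments.

    costs[i][j] is a hypothetical processing time of task i on server j;
    the objective is the total processing time of the assignment.
    """
    rng = random.Random(seed)
    n_tasks, n_servers = len(costs), len(costs[0])
    assign = [rng.randrange(n_servers) for _ in range(n_tasks)]
    cost = sum(costs[i][assign[i]] for i in range(n_tasks))
    best, best_cost = list(assign), cost
    t = t0
    for _ in range(iters):
        i = rng.randrange(n_tasks)           # move: re-place one task
        j = rng.randrange(n_servers)
        delta = costs[i][j] - costs[i][assign[i]]
        # always accept improvements; accept worse moves with
        # Boltzmann probability exp(-delta / t)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            assign[i] = j
            cost += delta
            if cost < best_cost:
                best, best_cost = list(assign), cost
        t *= cooling                         # geometric cooling
    return best, best_cost
```

On a toy instance the loop quickly settles on the cheapest placement; in an online setting the same loop would simply be re-run as new service requests arrive.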
This paper studies the impact of tunable parameters in the NB-IoT stack on the energy consumption of a user equipment (UE), e.g., a wireless sensor. NB-IoT is designed to enable massive machine-type c ...
The lack of consideration for application delay requirements in standard loss-based congestion control algorithms (CCAs) has motivated the proposal of several alternative CCAs. As such, Copa is one of the most recent and promising CCAs, and it has attracted attention from both academia and industry. The delay performance of Copa is governed by a mostly static latency-throughput tradeoff parameter, δ. However, a static δ parameter makes it difficult for Copa to achieve consistent delay and throughput over a range of bottleneck bandwidths. In particular, the coexistence of 4G and 5G networks and the wide range of bandwidths experienced in NG-RANs can result in inconsistent CCA performance. To this end, we propose a modification to Copa, Copa-D, that dynamically tunes δ to achieve a consistent delay performance. We evaluate the modification over emulated fixed, 4G, and 5G bottlenecks. The results show that Copa-D achieves consistent delay with minimal impact on throughput in fixed capacity bottlenecks. Copa-D also allows a more intuitive way of specifying the latency-throughput tradeoff and achieves more accurate and predictable delay in variable cellular bottlenecks.
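The idea of replacing Copa's static δ with a controller can be illustrated with a toy update rule. The multiplicative form, gain, and clamping bounds below are assumptions for illustration, not Copa-D's actual tuning algorithm:

```python
def tune_delta(delta, measured_delay_ms, target_delay_ms,
               gain=0.1, lo=0.05, hi=2.0):
    """One hypothetical tuning step for Copa's latency-throughput knob.

    When the measured queuing delay overshoots the target, delta is raised
    (Copa then backs off harder, favoring latency); on undershoot it is
    lowered (favoring throughput). Clamping keeps the knob in a sane range.
    """
    error = (measured_delay_ms - target_delay_ms) / target_delay_ms
    delta *= 1.0 + gain * error
    return min(hi, max(lo, delta))
```

The point of such a loop is that the operator specifies a delay target directly, which the abstract argues is more intuitive than picking δ by hand.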
Protection and automation in a smart grid environment often have stringent real-time communication requirements between devices within a substation as well as between distantly located substations. Th ...
This deliverable provides a final report on the work on transport protocol enhancements done in Work Package 3. First, we report on the extensions made to the SCTP protocol that turn it into a viabl ...
To mitigate delay spikes during transmission of bursty signaling traffic, concurrent multipath transmission (CMT) over several paths in parallel could be an option. Still, unordered delivery is a well ...
Distributed data centers host telco virtual network functions, mixing workloads that require data transport with either low end-to-end latency or large bandwidth for high throughput, e.g., stemming from the tough requirements of 5G use cases. A trend is the use of relatively inexpensive, off-the-shelf switches in data center networks, where the dominant transport traffic is TCP. Today's TCP protocol will not be able to meet such requirements. The transport protocol evolution is driven by transport performance (latency and throughput) and robustness enhancements in data centers, which include new transport protocols and protocol extensions such as DCTCP, MPTCP, and QUIC, and has led to intensive standardization work and contributions to 3GPP and IETF. By implementing ECN-based congestion control instead of the packet-loss-based TCP AIMD congestion control algorithm, DCTCP not only solves the latency issue in TCP congestion control caused by switch buffer bloat but also achieves improved packet-loss and throughput performance. DCTCP can also co-exist with normal TCP by applying a modern coupled queue-management algorithm in the switches of DC networks, in line with the IETF L4S architecture. MPTCP is an extension to TCP that can be deployed in a DC's fat-tree architecture to improve transport throughput and shorten latency by mitigating the bandwidth issues caused by TCP connection collisions within the data center. QUIC is a reliable, multiplexed transport protocol over UDP that incorporates many of the latest transport improvements and innovations and can be used to improve the transport performance of streaming media delivery. The Clos topology is a commonly used network topology in a distributed data center.
In the Clos architecture, an oversubscribed fabric cannot handle full wire-speed traffic, so a mechanism is needed to handle overload situations, e.g., by scaling out the fabric. However, this introduces additional end-to-end latency in cases where the switch buffer is bloated, and causes transport flow congestion. In this survey paper, DCTCP, MPTCP, and QUIC are discussed as solutions for transport performance enhancement in 5G mobile networks that avoid the transport flow congestion caused by switch buffer bloat from overloaded switch queues in data centers. Working paper, December 2017. High Quality Networked Services in a Mobile World (HITS).
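DCTCP's ECN-based control, discussed above, keeps an EWMA α of the fraction of ECN-marked packets and cuts the window in proportion to it rather than halving it (RFC 8257). A minimal per-RTT sketch with simplified packet accounting:

```python
def dctcp_update(cwnd, alpha, ecn_marked, acked, g=1.0 / 16):
    """One DCTCP round: update alpha from the marked fraction, then either
    cut the window in proportion to alpha (congestion) or grow additively.

    cwnd is in segments; g is the EWMA gain suggested in RFC 8257.
    """
    frac = ecn_marked / acked if acked else 0.0
    alpha = (1 - g) * alpha + g * frac           # alpha <- (1-g)*alpha + g*F
    if ecn_marked:
        cwnd = max(1.0, cwnd * (1 - alpha / 2))  # gentle, proportional cut
    else:
        cwnd += 1.0                              # one segment per RTT
    return cwnd, alpha
```

Mild marking thus costs only a small window reduction, which is what keeps switch queues short without sacrificing throughput.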
This document describes the design and implementation of the 5GENESIS Monitoring & Analytics (M&A) framework (Release A), developed within Task T3.3 of the Project work plan. Fifth Generation End-to-End Network, Experimentation, System Integration, and Showcasing.
This report covers a project in the course Computer Engineering Project, DVAE08, at Karlstad University. The aim of the project was to modify an already existing solution, called Socket Intents, for selecting the most fitting path for known traffic online, using a proactive approach instead of a reactive one. The purpose of the modified version is to make the previous solution compatible with the transport protocol SCTP. This solution consists of three newly implemented components: a header parser, a sniffer, and a query manager. The header parser and sniffer receive packets from the traffic and send them to one another. The query manager handles queries from the policies to the sniffer, as well as the responses. Together, these components gather information about the state of the network and select the most fitting path that fulfills application needs. The results show that the modification works well for the SCTP one-to-one socket type.
The Internet of Things (IoT) has emerged as a fundamental cornerstone in the digitalization of industry and society. Still, IoT devices' limited processing and memory capacities pose a problem for conducting complex and time-sensitive computations such as AI-based shop floor monitoring or personalized health tracking on these devices, and offloading to the cloud is not an option due to excessive delays. Edge computing has recently emerged to address the requirements of these IoT applications. This paper formulates the scheduling of tasks between IoT devices, edge servers, and the cloud in a three-layer Mobile Edge Computing (MEC) architecture as a Mixed-Integer Linear Programming (MILP) problem. The paper proposes a simulated annealing-based task scheduling technique and demonstrates that it schedules tasks almost as time-efficiently as if the MILP problem had been solved with a mixed-integer programming optimization package, but at a fraction of the cost in terms of CPU, memory, and network resources. Also, the paper demonstrates that the proposed task scheduling technique compares favorably in terms of efficiency, resource consumption, and timeliness with previously proposed techniques based on heuristics, including genetic programming.
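For toy instances, the optimum that an annealing scheduler is compared against can be obtained by exhaustive search. The model below (independent tasks, a single per-layer delay per unit of work) is a deliberately simplified stand-in for the paper's actual MILP formulation:

```python
from itertools import product

def optimal_schedule(tasks, delays):
    """Ground-truth minimizer of total delay for a three-layer toy model.

    tasks: list of task sizes in arbitrary work units (hypothetical);
    delays: per-layer delay per work unit, e.g. device/edge/cloud.
    Enumerates all len(delays)**len(tasks) placements.
    """
    layers = list(delays)
    best, best_cost = None, float("inf")
    for placement in product(layers, repeat=len(tasks)):
        cost = sum(size * delays[layer]
                   for size, layer in zip(tasks, placement))
        if cost < best_cost:
            best, best_cost = placement, cost
    return best, best_cost
```

The enumeration grows exponentially in the number of tasks, which is exactly why the paper turns to a MILP solver and a simulated annealing heuristic for realistic problem sizes.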
Ossification of the Internet transport-layer architecture is a significant barrier to innovation of the Internet. Such innovation is desirable for many reasons. Current applications often need to i ...
The Transport Services architecture [I-D.ietf-taps-arch] defines a system that allows applications to use transport networking protocols flexibly. This document serves as a guide to implementation on how to build such a system.
This document presents the core transport system in NEAT, as used for development of the reference implementation of the NEAT System. The document describes the components necessary to realise the ...
Cellular Internet of Things (CIoT) is a Low-Power Wide-Area Network (LPWAN) technology. It aims for cheap, low-complexity IoT devices that enable large-scale deployments and wide-area coverage. Moreover, to make large-scale deployments of CIoT devices in remote and hard-to-access locations possible, a long device battery life is one of the main objectives of these devices. To this end, 3GPP has defined several energy-saving mechanisms for CIoT technologies, not least for the Narrow-Band Internet of Things (NB-IoT) technology, one of the major CIoT technologies. Examples of mechanisms defined include CONNECTED-mode DRX (cDRX), Release Assistance Indicator (RAI), and Power Saving Mode (PSM). This paper considers the impact of the essential energy-saving mechanisms on minimizing the energy consumption of NB-IoT devices, especially the cDRX and RAI mechanisms. The paper uses a purpose-built NB-IoT simulator that has been tested in terms of its built-in energy-saving mechanisms and validated with real-world NB-IoT measurements. The simulated results show that it is possible to save 70%-90% in energy consumption by enabling the cDRX and RAI. In fact, the results suggest that a battery life of 10 years is only achievable provided the cDRX, RAI, and PSM energy-saving mechanisms are correctly configured and used.
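The leverage of mechanisms like RAI comes down to simple duty-cycle arithmetic: shortening the time the radio stays in CONNECTED mode after each report dominates the energy budget. The figures below (battery size, power draws, active times) are illustrative assumptions, not 3GPP or measured values:

```python
def battery_life_years(report_interval_s, active_s, active_mw,
                       idle_mw=0.015, battery_wh=18.0):
    """Back-of-the-envelope battery life for a periodic NB-IoT reporter.

    Per report, the device spends active_s seconds at active_mw (shorter if
    RAI releases the connection early, longer with a conservative cDRX
    configuration), and the rest of the interval in PSM at idle_mw.
    """
    idle_s = report_interval_s - active_s
    avg_mw = (active_s * active_mw + idle_s * idle_mw) / report_interval_s
    hours = battery_wh * 1000.0 / avg_mw
    return hours / (24 * 365)
```

With daily reporting, cutting the active window from 30 s to 5 s per report multiplies the projected lifetime severalfold, which mirrors the scale of the 70%-90% savings reported above.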
Due to cloud computing's limitations, edge computing has emerged to address computation-intensive and time-sensitive applications. In edge computing, users can offload their tasks to edge servers. However, the edge servers' resources are limited, making task scheduling anything but easy. In this paper, we formulate the scheduling of tasks between the user equipment, the edge, and the cloud as a Mixed-Integer Linear Programming (MILP) problem that aims to minimize the total system delay. To solve this MILP problem, we propose an Enhanced Healed Genetic Algorithm solution (EHGA). The results with EHGA are compared with those of CPLEX and a few heuristics previously proposed by us. The results indicate that EHGA is more accurate and reliable than the heuristics and quicker than CPLEX at solving the MILP problem.
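The genetic search underlying a solver like EHGA can be sketched with textbook operators. The truncation selection, one-point crossover, and per-gene mutation below are generic GA machinery for illustration, not EHGA's actual (healed) operators:

```python
import random

def ga_schedule(costs, pop=20, gens=60, mut=0.1, seed=7):
    """Minimal GA over task-to-resource assignments; fitness = total delay.

    costs[i][j] is the (hypothetical) delay of running task i on resource j.
    Needs at least two tasks for the one-point crossover.
    """
    rng = random.Random(seed)
    n, m = len(costs), len(costs[0])

    def fitness(ind):
        return sum(costs[i][ind[i]] for i in range(n))

    population = [[rng.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        elite = population[: pop // 2]          # truncation selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)           # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n):                  # per-gene mutation
                if rng.random() < mut:
                    child[i] = rng.randrange(m)
            children.append(child)
        population = elite + children
    best = min(population, key=fitness)
    return best, fitness(best)
```

Keeping the elite half each generation guarantees the best assignment found so far is never lost, a simple form of the elitism most practical GAs rely on.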
In this paper, we aim at tackling the scalability of the Mobility Management Entity (MME), which is one of the key control plane entities of the 4G Evolved Packet Core (EPC). One of the solutions to this problem is to virtualize the MME by adopting the Network Function Virtualization (NFV) technology and deploy it as a pool of virtualized instances (vMMEs) with a frontend load balancer. Although several designs have been proposed, many of them do not consider the load balancing aspect. To this end, we propose using a Weighted Round Robin (WRR) algorithm for balancing signaling load in an MME architecture. We implement and compare its performance to two currently used algorithms: random and round robin. Experimental results show that the WRR algorithm can significantly reduce the control plane latency as compared to the other two schemes. High Quality Networked Services in a Mobile World (HITS).
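One way WRR can be realized is the smooth weighted round robin scheme (the interleaving popularized by nginx); the abstract does not specify which WRR variant the vMME load balancer uses, so this is an assumption for illustration:

```python
def smooth_wrr(weights, n):
    """Return n backend indices so that backend i gets a share proportional
    to weights[i], with picks interleaved rather than bunched.

    Each round, every backend's current score grows by its weight; the
    highest-scoring backend is picked and penalized by the total weight.
    """
    current = [0] * len(weights)
    total = sum(weights)
    order = []
    for _ in range(n):
        for i, w in enumerate(weights):
            current[i] += w
        pick = max(range(len(weights)), key=lambda i: current[i])
        current[pick] -= total
        order.append(pick)
    return order
```

With weights (5, 1, 1) a full cycle yields [0, 0, 1, 0, 2, 0, 0]: the first vMME receives five of every seven requests, but never all of them in a row, which smooths the signaling load.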
The NEAT System offers an enhanced API for applications that disentangles them from the actual transport protocol being used. The system also enables applications to communicate their service requi ...
5G mobile networks introduce the concept of network slicing, the functionality of creating virtual networks on top of shared physical infrastructure. Such slices can be tailored to various vertical services. A single User Equipment (UE) may be served by multiple network slice instances simultaneously, which opens up the possibility of dynamically steering traffic in response to the specific needs of individual applications -- and as a reaction to events inside the network, e.g., network failures. This paper presents the PoLicy-based Architecture for Network Slicing (PLANS). In this policy framework, the network slice management entity in the 5G core and the UE can cooperatively optimize the usage of the available network slices via policy systems installed both inside the network and on the UE. The PLANS architecture has been implemented and evaluated in a 5G testbed. For two different case studies, we show how such a system can be leveraged to provide optimized services and increased robustness against network failures. First, we consider a drone autopilot scenario, and demonstrate how PLANS can reduce network-slice recovery time by more than 90%. Second, we illustrate for a 360° video streaming scenario how PLANS can help prevent video quality degradation when a network slice becomes unavailable.
One of the challenges in Delay Tolerant Wireless Sensor Networks (DT-WSN) is to handle situations where the available buffer space is insufficient: the buffer management problem. Although several buffer management algorithms have been proposed for DT-WSNs, to the best of our knowledge, there is no comprehensive study on the effects different factors have on their performance, and which evaluates the relative performance of these algorithms in different contexts. This paper evaluates, in a fixed-factor factorial experiment, the performance in terms of latency and Quality of Information (QoI) of four representative buffer management algorithms for DT-WSNs: two traditional algorithms, FIFO and Random, and two QoI-based algorithms, one proposed by Humber and Ngai, and the SmartGap algorithm. The evaluation suggests that the buffer management algorithm, in combination with the employed routing protocol and the sensor-node buffer sizes, has a significant impact on latency, while the obtained QoI rather depends on the characteristics of the transported data and the routing protocol, provided a single-copy routing protocol is used. Moreover, the evaluation suggests that QoI-based buffer management algorithms do offer improved QoI, with a 31% improvement in MAE for SmartGap compared to FIFO. However, they do so at the expense of higher latency, with SmartGap giving a 60% higher latency than FIFO on average.
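The intuition behind gap-aware policies such as SmartGap can be sketched as an eviction rule: when the buffer is full, drop the sample whose value is best reconstructed from its neighbors, so the QoI loss is minimal. The linear-interpolation rule below is an illustrative stand-in, not SmartGap's published algorithm:

```python
def evict_index(buffer):
    """Pick the interior sample to drop from a full buffer of (time, value)
    readings sorted by time: the one with the smallest reconstruction error
    under linear interpolation between its two neighbors.

    Endpoints are kept so the covered time span is preserved.
    """
    best_i, best_err = None, float("inf")
    for i in range(1, len(buffer) - 1):
        (t0, v0), (t1, v1), (t2, v2) = buffer[i - 1], buffer[i], buffer[i + 1]
        interp = v0 + (v2 - v0) * (t1 - t0) / (t2 - t0)
        err = abs(interp - v1)
        if err < best_err:
            best_i, best_err = i, err
    return best_i
```

A FIFO policy would instead always drop the oldest sample regardless of content, which is why QoI-based policies win on MAE but can hold old samples longer and thereby raise latency.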
Mobile Internet usage has increased significantly over the last decade, and it is expected to grow to almost 4 billion users by 2020. Even after the great effort dedicated to improving the performance, there still exist unresolved questions and problems regarding the interaction between TCP and mobile broadband technologies such as LTE. This chapter presents a thorough investigation of the behavior of distinct TCP implementations under various network conditions in different LTE deployments, including the extent to which TCP is capable of adapting to the rapid variability of mobile networks under different network loads, with distinct flow types, during the start-up phase, and in mobile scenarios at different speeds. Loss-based algorithms tend to completely fill the queue, creating huge standing queues and inducing packet losses under both stationary and mobile conditions. On the other hand, delay-based variants are capable of limiting the standing queue size and decreasing the amount of pac...
Currently, improving the performance of Big Data in general, and velocity in particular, is challenging due to the inefficiency of current network management and the lack of coordination between the application layer and the network layer in reaching better scheduling decisions, which could improve Big Data velocity performance. In this chapter, we discuss the role of the recently emerged software-defined networking (SDN) technology in helping the velocity dimension of Big Data. We start the chapter by providing a brief introduction to Big Data velocity and its characteristics and the different modes of Big Data processing, followed by a brief explanation of how SDN can overcome the challenges of Big Data velocity. In the second part of the chapter, we describe in detail some proposed solutions that have applied SDN to improve Big Data performance in terms of shortened processing time in different Big Data processing frameworks, ranging from batch-oriented, MapReduce-based frameworks to real-time and stream-processing frameworks such as Spark and Storm. Finally, we conclude the chapter with a discussion of some open issues.
Tactile Internet defines applications for remotely controlling and manipulating critical devices that require perceived real-time operation with additional demanding requirements like reliability. These use cases with stringent requirements demand adequate transport protocols to take advantage of the underlying possibilities. Traditional transport-layer solutions like TCP and UDP are no longer sufficient, hence novel protocols are being developed to support these applications. In this paper, we present an implementation and evaluation of the Multiconnection Tactile Internet Protocol (MTIP), a transport-layer proposal to support these communications. MTIP uses application and network status information to perform an intelligent selection of paths in order to improve reliability and latency. In our evaluations, we study how the different configurations of the MTIP algorithm affect this selection, and we see a direct trade-off: with more restrictive thresholds, MTIP can increase the number of packets received correctly at the cost of sending extra duplicate packets.
We present the latency-aware multipath scheduler ZQTRTT that takes advantage of the multipath opportunities in information-centric networking. The goal of the scheduler is to use the (single) lowes ...
The strict low-latency requirements of applications such as virtual reality, online gaming, etc., cannot be satisfied by the current Internet. This is due to the characteristics of classic TCP variants such as Reno and Cubic, which induce high queuing delays when used for capacity-seeking traffic, which in turn results in unpredictable latency. The Low Latency, Low Loss, Scalable throughput (L4S) architecture addresses this problem by combining scalable congestion controls such as DCTCP and TCP Prague with early congestion signaling from the network. It defines a Dual Queue Coupled (DQC) AQM that isolates low-latency traffic from the queuing delay of classic traffic while ensuring the safe co-existence of scalable and classic flows on the global Internet. In this paper, we benchmark the DualPI2 scheduler, a reference implementation of the DQC AQM, to validate some of the experimental results reported in previous works that demonstrate the co-existence of scalable and classic congestion...
The Cellular Internet of Things (CIoT), a new paradigm, paves the way for a large-scale deployment of IoT devices. CIoT promises enhanced coverage and massive deployment of low-cost IoT devices with an expected battery life of up to 10 years. However, such a long battery life can only be achieved provided the CIoT device is configured with energy efficiency in mind. This paper conducts a comprehensive survey on energy-saving solutions in 3GPP-based CIoT networks. In comparison to current studies, the contribution of this paper is the classification and an extensive analysis of existing energy-saving solutions for CIoT, e.g., configuration of particular parameter values and software modifications of transport- or radio-layer protocols, while also stressing key parameters impacting the energy consumption such as the frequency of data reporting, discontinuous reception cycles (DRX), and Radio Resource Control (RRC) timers. In addition, we discuss shortcomings, limitations, and possible opportunities which can be investigated in the future to reduce the energy consumption of CIoT devices.
The currently rather heterogeneous wireless landscape makes handover between different network technologies, so-called vertical handover, a key to a continued success for wireless Internet access. Recently, an extension to the Stream Control Transmission Protocol (SCTP)-the Dynamic Address Reconfiguration (DAR) extension-was standardized by IETF. This extension enables the use of SCTP for vertical handover. Still, the way vertical handover works in SCTP with DAR makes it less suitable for real-time traffic. Particularly, it takes a significant amount of time for the traffic to ramp up to full speed on the handover target path. In this paper, we study the implications of an increased initial congestion window for real-time traffic on the handover target path when competing traffic is present. The results clearly show that an increased initial congestion window could significantly reduce the transfer delay for real-time traffic, provided the fair share of the available capacity on the...
Next-generation (5G) networks aim to support services that demand strict requirements such as low latency, high throughput, and high availability. Telecom operators have adopted Network Functions Virtualization (NFV) to virtualize network functions and deploy them at distributed cloud datacenters. Deploying virtual network functions (VNFs) close to the end-user can reduce Internet latency. However, network congestion in telco cloud datacenters can result in increased latency, low network utilization, and a drop in throughput. Existing protocols are either not capable of utilizing the multiple paths offered by datacenter topologies (e.g., DCTCP), require a major architectural change and face deployment challenges (e.g., NDP), or increase the flow completion times of short flows (e.g., MPTCP). To address this, we propose a multipath transport for telco cloud datacenters called coupled multipath datacenter TCP, MDTCP. MDTCP evolves MPTCP subflows to employ ECN signals to react to congestion before queue overflow, offering both reduced latency and higher network utilization. The evaluation of MDTCP with simulated traffic indicates comparable or lower flow completion times compared with DCTCP and NDP for most of the studied traffic scenarios. The simulation results imply that MDTCP could give better throughput for telco traffic and at the same time be as fair as MPTCP in datacenters.
Network Function Virtualization (NFV) is a promising solution for telecom operators and service providers to improve business agility, by enabling a fast deployment of new services, and by making it possible for them to cope with the increasing traffic volume and service demand. NFV enables virtualization of network functions that can be deployed as virtual machines on general-purpose server hardware in cloud environments, effectively reducing deployment and operational costs. To benefit from the advantages of NFV, virtual network functions (VNFs) need to be provisioned with sufficient resources and perform without impacting network quality of service (QoS). To this end, this paper proposes a model for VNF placement and provisioning optimization that guarantees the latency requirements of the service chains. Our goal is to optimize resource utilization in order to reduce cost while satisfying QoS requirements such as end-to-end latency. We extend a related VNF placement optimization with a fine-grained latency model that includes virtualization overhead. The model is evaluated on a simulated network, and it provides placement solutions ensuring the required QoS guarantees.
The stability and performance of the Internet to date have in a large part been due to the congestion control mechanism employed by TCP. However, while the TCP congestion control is appropriate for traditional applications such as bulk data transfer, it has been found less than ideal for multimedia applications. In particular, audio and video streaming applications have difficulties managing the rate halving performed by TCP in response to congestion. To this end, the majority of multimedia applications use either a congestion control scheme which reacts less drastically to congestion and therefore often is more aggressive than TCP, or, worse yet, no congestion control whatsoever. Since the Internet community strongly fears that a rapid deployment of multimedia applications which do not behave in a fair and TCP-friendly manner could endanger the current stability and performance of the Internet, a broad spectrum of TCP-friendly congestion control schemes have been proposed. In this re...
This document presents the core transport system in NEAT, as used for development of the reference implementation of the NEAT System. The document describes the components necessary to realise the basic Transport Services provided by the NEAT User API, with the description of a set of NEAT building blocks and their related design choices. The design of this core transport system, which is the final product of Work Package 2, is driven by the Transport Services and API design from Task 1.4, and in close coordination with the overall NEAT architecture defined in Task 1.2. To realise the Transport Services provided by the API, a set of transport functions has to be provided by the NEAT Core Transport System. These functions take the form of several building blocks, or NEAT Components, each representing an associated implementation activity. Some components are needed to ensure the basic operation of the NEAT System, e.g., a NEAT Flow Endpoint, a callback-based NEAT API Framework, the N...
In recent years, Internet and IP technologies have made inroads into almost every communication market ranging from best-effort services such as email and Web, to soft real-time applications such as VoIP, IPTV, and video. However, providing a transport service over IP that meets the timeliness and availability requirements of soft real-time applications has turned out to be a complex task. Although network solutions such as IntServ, DiffServ, MPLS, and VRRP have been suggested, these solutions often fail to provide a transport service for soft real-time applications end to end. Additionally, they have so far only been modestly deployed. In light of this, this thesis considers transport protocols for soft real-time applications. Part I of the thesis focuses on the design and analysis of transport protocols for soft real-time multimedia applications with lax deadlines, such as image-intensive Web applications. Many of these applications do not need a completely reliable transport s...
Information-centric networking (ICN), with its design around name-based forwarding and in-network caching, holds great promise to become a key architecture for the future Internet. Many proposed ICN hop-by-hop congestion control schemes assume a fixed and known link capacity, which rarely — if ever — holds true for wireless links. Firstly, we demonstrate that although these congestion control schemes are able to fairly well utilise the available wireless link capacity, they greatly fail to keep the delay low. In fact, they essentially offer the same delay as in the case with no hop-by-hop, only end-to-end, congestion control. Secondly, we show that by complementing these schemes with an easy-to-implement, packet-train capacity estimator, we reduce the delay to a level significantly lower than what is obtained with only end-to-end congestion control, while still being able to keep the link utilisation at a high level.
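The packet-train idea mentioned above can be sketched in a few lines. This is a generic illustration, not the paper's implementation: function and parameter names are assumptions, and a real estimator would also filter out trains disturbed by cross traffic.

```python
# Sketch of a packet-train capacity estimator of the kind the paper
# pairs with hop-by-hop congestion control. Names are illustrative.
def estimate_capacity(arrival_times, packet_size_bits):
    """Estimate bottleneck capacity from a train of back-to-back packets.

    On a link of capacity C, packets sent back to back arrive spaced by
    packet_size / C, so the median inter-arrival gap (robust against a
    few disturbed samples) yields a capacity estimate in bits/second."""
    gaps = sorted(t2 - t1 for t1, t2 in zip(arrival_times, arrival_times[1:]))
    if not gaps:
        raise ValueError("need at least two arrivals")
    median_gap = gaps[len(gaps) // 2]
    return packet_size_bits / median_gap
```

For example, 1500-byte (12000-bit) packets arriving roughly every 1 ms suggest a capacity around 12 Mbit/s; on a wireless link the estimate must be refreshed continuously as the capacity varies.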
Ideally, network applications should be able to select an appropriate transport solution from among available transport solutions. However, at present, there is no agreed-upon way to do this. In fact, there is not even an agreed-upon way for a source end host to determine if there is support for a particular transport along a network path. This draft addresses these issues by proposing a Happy Eyeballs framework. The proposed Happy Eyeballs framework enables the selection of the transport solution that is most appropriate according to application requirements, pre-set policies, and estimated network conditions. Additionally, the proposed framework makes it possible for an application to find out whether or not a particular transport is supported along a network path towards a specific destination.
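The racing behaviour a Happy Eyeballs framework relies on can be sketched as below. This is a minimal simulation under assumed names (`probe`, `select_transport`) and simulated delays; a real implementation would attempt, e.g., TCP and SCTP connection setup in parallel and keep whichever completes first.

```python
# Minimal sketch of Happy Eyeballs-style transport racing. The candidate
# tuples (name, delay, supported) are hypothetical stand-ins for real
# connection attempts over different transports.
import asyncio

async def probe(name, delay, supported=True):
    """Pretend to set up one transport; raise if the path lacks support."""
    await asyncio.sleep(delay)
    if not supported:
        raise ConnectionError(f"{name} not supported on this path")
    return name

async def select_transport(candidates):
    """Race all candidates and return the first one that succeeds."""
    tasks = [asyncio.create_task(probe(*c)) for c in candidates]
    try:
        for coro in asyncio.as_completed(tasks):
            try:
                return await coro
            except ConnectionError:
                continue          # e.g. a middlebox blocks this transport
        raise ConnectionError("no candidate transport succeeded")
    finally:
        for t in tasks:
            t.cancel()            # abandon the losing attempts
```

Note how an unsupported transport that "fails fast" does not delay the outcome: the race simply falls through to the next candidate to complete.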
Ossification is one of the most serious problems facing the future of the Internet. The most commonly used protocols today cannot keep up with the increasing demand for better network performance of ...
In the fifth generation (5G) mobile networks, the number of user-plane gateways has increased, and, in contrast to previous generations, they can be deployed in a decentralized way and auto-scaled independently from their control-plane functions. Moreover, the performance of the user-plane gateways can be boosted with the adoption of advanced acceleration techniques such as Vector Packet Processing (VPP). However, the increased number of user-plane gateways has also made load balancing a necessity, something we find has so far received little attention. Moreover, the introduction of VPP poses a challenge to the design of the auto-scaling of user-plane gateways. In this paper, we address these two challenges by proposing a novel performance indicator for making better auto-scaling decisions, and by proposing three new dynamic load-balancing algorithms for the user plane of a VPP-based, softwarized 5G network. The novel performance indicator is estimated based on the VPP vector rate and is used as a threshold for the auto-scaling process. The dynamic load-balancing algorithms take into account the number of bearers allocated for each user-plane gateway and their VPP vector rate. We validate and evaluate our proposed solution in a 5G testbed. Our experimental results show that the scaling helps to reduce the packet latency for the user-plane traffic, and that our proposed load-balancing algorithms can give a better distribution of traffic load as compared to traditional static algorithms.
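The two mechanisms described above can be illustrated as follows. This is a simplified sketch: the threshold value, the equal weighting of bearers and vector rate, and all field names are assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of (a) a vector-rate-based scale-out trigger and
# (b) a load-balancing choice that combines allocated bearers with the
# measured VPP vector rate. Thresholds and names are assumptions.
def needs_scale_out(vector_rates, threshold=200.0):
    """Trigger scale-out when the mean VPP vector rate (packets handled
    per dispatch vector) across gateways approaches saturation."""
    return sum(vector_rates) / len(vector_rates) >= threshold

def pick_gateway(gateways):
    """Pick the least-loaded user-plane gateway for a new bearer.

    Each gateway dict carries 'bearers' (allocated bearers) and
    'vector_rate'; blending both avoids over-selecting a gateway that
    has few bearers but an already saturated data path."""
    def score(gw):
        return gw["bearers"] + gw["vector_rate"]   # naive equal weighting
    return min(gateways, key=score)
```

The appeal of the vector rate as an indicator is that it reflects how busy the VPP data path actually is, which CPU utilization alone can misrepresent under poll-mode packet processing.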


Mobile internet usage has risen significantly over the last decade, and it is expected to grow to almost 4 billion users by 2020. Despite the great effort dedicated to improving performance, there still exist unresolved questions and problems regarding the interaction between TCP and mobile broadband technologies such as LTE. This chapter examines the behavior of distinct TCP implementations under various network conditions in different LTE deployments, including the extent to which the performance of TCP is capable of adapting to the rapid variability of mobile networks under different network loads, with distinct flow types, during the start-up phase, and in mobile scenarios at different speeds. Loss-based algorithms tend to completely fill the queue, creating huge standing queues and inducing packet losses under both stationary and mobility conditions. On the other hand, delay-based variants are capable of limiting the standing queue size and decreasing the number of packets that are dropped in the eNodeB, but under some circumstances they are unable to reach the maximum capacity. Under mobility, where the radio conditions are more challenging for TCP, the loss-based TCP implementations offer better throughput and are able to better utilize available resources than the delay-based variants do. Finally, CUBIC under highly variable circumstances usually enters the congestion avoidance phase prematurely, resulting in a slower and longer start-up phase due to the use of the Hybrid Slow-Start mechanism. Therefore, CUBIC is unable to efficiently utilize radio resources during shorter transmission sessions.
In-Band Network Telemetry (INT) is a novel framework for collecting telemetry items and switch-internal state information from the data plane at line rate. With the support of programmable data planes and the programming language P4, switches parse telemetry instruction headers and determine which telemetry items to attach using custom metadata. At the network edge, telemetry information is removed and the original packets are forwarded, while telemetry reports are sent to a distributed stream processor for further processing by a network monitoring platform. In order to avoid excessive load on the stream processor, telemetry items should not be sent for each individual packet but rather when certain events are triggered. In this paper, we develop a programmable INT event detection mechanism in P4 that allows customization of which events to report to the monitoring system, on a per-flow basis, from the control plane. At the stream processor, we implement a fast INT report collector using the kernel-bypass technique AF_XDP, which parses telemetry reports and streams them to a distributed Kafka cluster, which can apply machine learning, visualization, and further monitoring tasks. In our evaluation, we use real-world traces from different data center workloads and show that our approach is highly scalable and significantly reduces the network overhead and stream processor load due to effective event pre-filtering inside the switch data plane. While the INT report collector can process around 3 Mpps of telemetry reports per core, using event pre-filtering increases the capacity by 10-15x.

I. INTRODUCTION

Operations, Administration, and Management (OAM) refers to protocols, tools, and mechanisms that help network operators with fault indication, performance monitoring, security management, diagnostic functions, accounting, configuration, and service provisioning.
In traditional carrier networks, OAM tools such as SNMP and OWAMP-Test are used; however, these tools have proven inadequate for SDN-NFV data centers. They are not scalable and cannot provide fine-grained, real-time information about the overall performance of the data center infrastructure [1]. In-band Network Telemetry (INT) has gained a lot of momentum over the last few years [1]-[5]. The idea behind the INT framework is that each node along a network path adds telemetry items and network state to in-band, data plane traffic. Telemetry items may include switch ID, ingress timestamps, queue occupancy information, and various other performance-related metadata, which are added at line rate as customized headers to in-band, data plane packets. The telemetry items are forwarded to a distributed network monitoring platform, which
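The per-flow event pre-filtering rule at the heart of the approach can be modelled in a few lines. The real logic runs as P4 match-action tables in the switch data plane; this Python model, with assumed thresholds and field names, only illustrates the triggering condition.

```python
# Sketch of per-flow INT event pre-filtering: emit a telemetry report
# only when a watched metric changes "enough" since the last report.
# Thresholds and names are assumptions; the real logic is P4 on-switch.
class IntEventDetector:
    def __init__(self, queue_delta=10, latency_delta_us=100):
        self.queue_delta = queue_delta          # queue-occupancy change trigger
        self.latency_delta_us = latency_delta_us  # hop-latency change trigger
        self.last = {}                          # per-flow last reported state

    def should_report(self, flow_id, queue_depth, hop_latency_us):
        prev = self.last.get(flow_id)
        if prev is None:
            fire = True                         # always report a new flow
        else:
            fire = (abs(queue_depth - prev[0]) >= self.queue_delta or
                    abs(hop_latency_us - prev[1]) >= self.latency_delta_us)
        if fire:
            self.last[flow_id] = (queue_depth, hop_latency_us)
        return fire
```

Because steady-state packets produce no reports, the stream processor only sees the interesting transitions, which is what yields the 10-15x capacity gain reported in the evaluation.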
The demand for mobile communication is continuously increasing, and mobile devices are now the communication device of choice for many people. To guarantee connectivity and performance, mobile devices are typically equipped with multiple interfaces. To this end, exploiting multiple available interfaces is also a crucial aspect of the upcoming 5G standard for reducing costs, easing network management, and providing a good user experience. Multi-path protocols, such as Multi-path TCP (MPTCP), can be used to provide performance optimisation through load-balancing and resilience to coverage drops and link failures; however, they do not automatically guarantee better performance. For instance, low-latency communication has proven hard to achieve when a device has network interfaces with asymmetric capacity and delay (e.g., LTE and WLAN). For multi-path communication, the data scheduler is vital to provide low latency, since it decides over which network interface to send individual data segments. In this paper, we focus on the MPTCP scheduler with the goal of providing a good user experience for latency-sensitive applications when interface quality is asymmetric. After an initial assessment of existing scheduling algorithms, we present two novel scheduling techniques: the Block Estimation (BLEST) scheduler and the Shortest Transmission Time First (STTF) scheduler. BLEST and STTF are compared to existing schedulers in both emulated and real-world environments and are shown to reduce web object transmission times by up to 51% and provide 45% faster communication for interactive applications, compared to MPTCP's default scheduler.
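The core scheduling idea can be sketched as below. This is a rough illustration in the spirit of STTF's shortest-transmission-time rule, not the schedulers' actual kernel implementation: the completion-time model and field names are simplifications.

```python
# Rough sketch of delay-aware MPTCP subflow choice: estimate when a
# segment would be delivered on each subflow and choose the earliest,
# instead of blindly preferring the lowest-RTT path. Simplified model.
def pick_subflow(subflows, segment_bytes):
    """Return the subflow with the shortest estimated transmission time.

    Estimated time = queuing delay for bytes already in flight, plus the
    segment's own serialization time, plus half an RTT to reach the peer."""
    def eta(sf):
        rate = sf["cwnd_bytes"] / sf["rtt_s"]          # rough send rate
        queued = sf["inflight_bytes"] / rate           # drain time of backlog
        return queued + segment_bytes / rate + sf["rtt_s"] / 2
    return min(subflows, key=eta)
```

The instructive case is an almost-full low-RTT WLAN subflow versus an idle higher-RTT LTE subflow: the backlog term makes the scheduler prefer LTE, which is exactly the asymmetric-path situation where default lowest-RTT scheduling performs poorly.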