Ali Tizghadam

    In the new digital age, the pace and volume of growing transportation-related data are exceeding our ability to manage and analyze it. In this position paper, we present a data engine, Godzilla, to ingest real-time traffic data and support analytics and data mining over traffic data. Godzilla takes a multi-cluster approach to handle large volumes of growing data, changing workloads, and a varying number of users. The data originates at multiple sources and consists of multiple types. Meanwhile, the workloads fall into three camps: batch processing, interactive queries, and graph analysis. Godzilla supports multiple language abstractions, from scripting to SQL-like languages.
    Due to the time-varying nature of wireless networks, robust optimal methods are needed to control the behavior and performance of such networks; however, this is a challenging task since robustness metrics and QoS-based (quality of service) constraints in a wireless environment are typically highly non-linear and non-convex. This paper explores the possibility of using graph-theoretic metrics to provide robustness in a wireless network in the presence of a set of QoS constraints. In particular, we are interested in robust planning of a wireless network for a given demand matrix while keeping the end-to-end delay for input demands below a given threshold set. To this end, we show that the upper bound of the end-to-end round-trip time between two nodes of a network can be approximated by the point-to-point network criticality (or resistance distance) of the network. We construct a convex optimization problem to provide a delay-guaranteed, jointly optimal allocation of transmit powers and link flows. We show that the solution provides robust behavior, i.e., it is insensitive to environmental changes such as wireless link disruption; this is expected because network criticality is a robustness metric. Our framework can be applied to a wide range of SINR (signal-to-interference-plus-noise ratio) values.
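    The key computational object here is the point-to-point resistance distance, obtained from the Moore-Penrose inverse of the graph Laplacian. Below is a minimal numpy sketch of that computation on an illustrative 4-node topology (the weights are placeholders, not from the paper):

```python
import numpy as np

def resistance_distance(W):
    """Point-to-point resistance distances from a weighted adjacency matrix W.

    L = D - W is the graph Laplacian; with L+ its Moore-Penrose inverse,
    the resistance distance between nodes i and j is
    r(i, j) = L+[i, i] + L+[j, j] - 2 * L+[i, j].
    """
    L = np.diag(W.sum(axis=1)) - W
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

# Toy 4-node topology; entry W[i, j] is an illustrative link weight.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
R = resistance_distance(W)
print(R[0, 3])  # point-to-point criticality between nodes 0 and 3
```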
    This paper reports on a probabilistic method for traffic engineering (specifically routing and resource allocation) in backbone networks, where transport is the main service and robustness to unexpected changes in network parameters is required. We analyze the network using the probabilistic betweenness of the network nodes (or links). The theoretical results lead to the definition of "criticality" for nodes and links. Link criticality is used as the main metric to model the risk of taking a specific path from a source to a destination node. Different paths are ranked based on their criticality measure, and the best path is selected to route the flow along the core network. The choice of path is in the direction of preserving the robustness of the network to unforeseen changes in topology and traffic demands. The proposed method is useful in situations, such as MPLS and Ethernet networks, where path assignment is required.
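    As an illustration of the path-ranking idea, the sketch below enumerates candidate paths and selects the one with the lowest summed link criticality; the topology and per-link `crit` values are invented placeholders, since the paper derives criticality from probabilistic betweenness:

```python
import networkx as nx

# Illustrative topology; 'crit' stands in for the paper's link criticality.
G = nx.Graph()
G.add_edge("a", "b", crit=0.2)
G.add_edge("b", "d", crit=0.7)
G.add_edge("a", "c", crit=0.3)
G.add_edge("c", "d", crit=0.1)
G.add_edge("b", "c", crit=0.4)

def path_criticality(G, path):
    """Risk of a path, modeled here as the sum of its link criticalities."""
    return sum(G[u][v]["crit"] for u, v in zip(path, path[1:]))

# Rank all simple paths from source to destination; route on the least critical.
paths = sorted(nx.all_simple_paths(G, "a", "d"), key=lambda p: path_criticality(G, p))
best = paths[0]
print(best, path_criticality(G, best))
```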
    We begin by considering the grand challenge of designing application platforms that will enable the smart city. We identify two fundamental research challenges: 1. Cross-infrastructure Management Systems to coordinate resource use in a city; 2. Services Platforms that promote collective and individual intelligence. Next, we report on research progress in the design of application platforms that can support the smart applications required in the smart city. In particular, we describe the SAVI (Smart Applications on Virtual Infrastructure) application platform testbed for research in smart applications and its design using software-defined infrastructure. We present the CVST (Connected Vehicles and Smart Transportation) Platform for Intelligent Transportation Services in the Greater Toronto Area, which runs on the SAVI testbed. Its design provides the intelligence to enable smart management in smart cities. We end with a discussion of general principles for the design of smart infrastructures.
    Network criticality measures the robustness of a network to changes in topology, traffic demand, and faults. Recent research has shown that path selection according to network criticality metrics can lead to improved utilization and reduced blocking in mesh networks. In this paper we investigate the selection of survivable routes within the context of dynamic routing using weighted random-walk path criticality routing (WRW-PCR). To build backup paths for primary routes, a shared backup path selection strategy is considered. Each link is characterized by its active bandwidth, backup bandwidth, and available capacity. The WRW-PCR algorithm is used to find paths with less sensitivity to traffic and topology changes. We present simulation results demonstrating that path criticality routing results in much lower blocking than alternative routing algorithms.
    Auto-scalability has become an essential feature for cloud software systems, including but not limited to big data and IoT applications. Cloud application providers are now in full control of their applications' microservices and macroservices; virtual machines and containers can be provisioned or deprovisioned on demand at runtime. Elascale strives to adjust both micro and macro resources with respect to workload and changes in the internal state of the whole application stack. Elascale leverages the Elasticsearch stack for collection, analysis, and storage of performance metrics. Elascale then uses its default scaling engine to elastically adapt the managed application. Extendibility is guaranteed through provider, schema, plug-in, and policy elements in Elascale, by which flexible scalability algorithms, including both reactive and proactive techniques, can be designed and implemented for various technologies, infrastructures, and software stacks. In this paper, we present the architecture...
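    The abstract does not specify the default scaling engine's rules; the following is a minimal reactive-policy sketch of the kind such an engine might apply, with hypothetical thresholds and limits:

```python
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    """Reactive rules of the kind a default scaling engine might apply.

    Thresholds and replica limits are illustrative, not Elascale's defaults.
    """
    scale_out_cpu: float = 0.80   # add a replica above this utilization
    scale_in_cpu: float = 0.30    # remove a replica below this utilization
    min_replicas: int = 1
    max_replicas: int = 10

    def decide(self, cpu_util: float, replicas: int) -> int:
        if cpu_util > self.scale_out_cpu and replicas < self.max_replicas:
            return replicas + 1
        if cpu_util < self.scale_in_cpu and replicas > self.min_replicas:
            return replicas - 1
        return replicas

policy = ScalingPolicy()
print(policy.decide(cpu_util=0.92, replicas=3))  # -> 4
```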
    Malicious data manipulation reduces the effectiveness of machine learning techniques, which rely on accurate knowledge of the input data. Motivated by real-world applications in network flow classification, we address the problem of robust online learning with delayed feedback in the presence of malicious data generators that attempt to gain a favorable classification outcome by manipulating the data features. We propose online algorithms, termed ROLC-NC and ROLC-C, for the cases where the malicious data generators are non-clairvoyant and clairvoyant, respectively. We derive regret bounds for both algorithms and show that they are sub-linear under mild conditions. We further evaluate the proposed algorithms in network flow classification via extensive experiments using real-world data traces. Our experimental results demonstrate that both algorithms can approach the performance of an optimal static offline classifier that is not under attack, while outperforming the same offline classifier when tested with a mixture of normal and manipulated data.
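    ROLC-NC and ROLC-C themselves are not given in the abstract; the sketch below only illustrates the delayed-feedback setting, using a generic online logistic learner that buffers predictions until their labels arrive:

```python
import numpy as np

class DelayedFeedbackLearner:
    """Online logistic classifier updated only when delayed labels arrive.

    A generic illustration of the delayed-feedback setting; this is not the
    ROLC-NC/ROLC-C algorithm from the paper.
    """
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr
        self.pending = {}  # flow_id -> feature vector awaiting its label

    def predict(self, flow_id, x):
        self.pending[flow_id] = x
        return 1 if self.w @ x >= 0 else -1

    def feedback(self, flow_id, y):
        # The true label (+1/-1) arrives after a delay; take one gradient step.
        x = self.pending.pop(flow_id)
        margin = y * (self.w @ x)
        grad = -y * x / (1.0 + np.exp(margin))  # logistic loss gradient
        self.w -= self.lr * grad

learner = DelayedFeedbackLearner(dim=3)
print(learner.predict("flow-1", np.array([0.5, -1.0, 2.0])))
learner.feedback("flow-1", y=+1)  # delayed true label
```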
    We consider the minimization of the electricity cost of a core network where each node has access to solar renewable energy and energy storage in a time-of-use pricing environment. Using an optimization-based approach, we demonstrate that expenditure on electricity can be reduced by 60% through an effective energy management policy. We also present a distributed, greedy energy management algorithm, which makes hourly electricity purchase and energy storage decisions at each of the nodes in the network and performs close to the optimal case. Finally, we measure the impact of parameters including solar panel size, energy storage capacity, and storage charging rate, as well as seasonal variations of solar energy, on the service provider's expenditure on electricity.
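    The distributed greedy algorithm is not detailed in the abstract; a plausible per-node, per-hour decision rule under time-of-use pricing might look like the following sketch (all parameter names and the cheap-tariff rule are assumptions):

```python
def hourly_decision(load_kwh, solar_kwh, stored_kwh, price, cheap_price,
                    capacity_kwh, charge_rate_kwh):
    """One node's greedy purchase/storage decision for a single hour.

    Illustrative policy: cover load with solar, then storage, then the grid;
    top up storage from the grid only when the tariff is in its cheap band.
    """
    residual = max(load_kwh - solar_kwh, 0.0)
    from_storage = min(stored_kwh, residual)
    stored_kwh -= from_storage
    purchased = residual - from_storage
    if price <= cheap_price:  # off-peak: charge storage for later use
        charge = min(charge_rate_kwh, capacity_kwh - stored_kwh)
        stored_kwh += charge
        purchased += charge
    return purchased * price, stored_kwh

cost, stored = hourly_decision(load_kwh=8, solar_kwh=3, stored_kwh=2,
                               price=0.07, cheap_price=0.08,
                               capacity_kwh=10, charge_rate_kwh=4)
print(cost, stored)
```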
    Peer-to-Peer (P2P) systems have witnessed an explosive growth in popularity due to their desirable characteristics (robustness, scalability, availability). In this paper, we present an approach to bring these characteristics into the control plane of IP networks, which mainly relies on signaling protocols such as SIP to set up multimedia and instant messaging sessions. We present a structured P2P control plane based on modifications to the original Chord P2P topology, resulting in a hierarchical overlay of SIP peers that replaces traditional client-server paradigms in control plane signaling protocols. Implementations were used to study the performance of the proposed structured P2P control plane and its suitability for use in IP networks.
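    For readers unfamiliar with Chord, the sketch below shows the basic identifier-ring lookup such an overlay builds on; it is a toy flat ring, not the hierarchical SIP overlay proposed in the paper:

```python
import hashlib

RING_BITS = 8  # tiny identifier space for illustration

def node_id(name: str) -> int:
    """Hash a peer name or SIP URI onto the identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** RING_BITS)

def successor(key: int, ring: list[int]) -> int:
    """First node clockwise from the key on a sorted Chord-like ring."""
    for n in ring:
        if n >= key:
            return n
    return ring[0]  # wrap around the ring

peers = sorted(node_id(f"sip-peer-{i}") for i in range(8))
key = node_id("alice@example.com")  # SIP user mapped onto the ring
print(key, "->", successor(key, peers))
```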
    This paper reconsiders the problem of robust network design from a different point of view, using the concept of resistance distance from network science. It has been shown that some important network performance metrics, such as average utilization in a communication network or total power dissipation in an electrical grid, can be expressed as a linear combination of the point-to-point resistance distances of a graph. In this paper we choose a weighted linear combination of resistance distances, referred to as weighted network criticality (WNC), as the objective, and we investigate the vulnerability of different network types. In particular, we formulate a min-max convex optimization problem to design k-robust networks, and we provide an extension to account for joint optimization of resources and flows. We study the solution of the optimization problem in two different networks. First we consider RocketFuel topologies and Abilene as representatives for service providers...
    K-robust network design using resistance distance: Case of RocketFuel and power grids
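    A closely related convex program is the standard effective-resistance minimization over link weights, which can serve as a minimal stand-in for the WNC objective (the topology and unit budget are illustrative; this is not the paper's min-max k-robust formulation):

```python
import cvxpy as cp
import numpy as np

# Edge list of an illustrative 4-node topology.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n, m = 4, len(edges)

# Incidence vectors: L(w) = sum_e w_e * b_e b_e^T.
B = np.zeros((n, m))
for k, (i, j) in enumerate(edges):
    B[i, k], B[j, k] = 1.0, -1.0

w = cp.Variable(m, nonneg=True)
L = B @ cp.diag(w) @ B.T
J = np.ones((n, n)) / n

# trace((L + J)^{-1}) equals 1 plus the sum of reciprocal nonzero Laplacian
# eigenvalues, so minimizing it minimizes total effective resistance.
objective = cp.Minimize(cp.matrix_frac(np.eye(n), L + J))
problem = cp.Problem(objective, [cp.sum(w) == 1])  # unit weight budget
problem.solve()
print(np.round(w.value, 3))
```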
    The conventional approaches to routing and bandwidth allocation, the two major components of traffic engineering, have proved insufficient to address the QoS requirements of flows while optimizing utilization in complex communication networks. In this paper we consider ant colony algorithms to address this problem. Our studies show that ant-based routing models are sensitive to initial parameter settings; only careful adjustment of these initial parameters results in acceptable convergence behavior. The robust behavior of real ants, compared to the routing algorithms derived from them, inspires us to investigate the reasons behind the shortcomings of these algorithms. We present results from an in-depth study of ant behavior in a quest for a robust algorithm. In this work we consider a realistic environment in which multiple source-destination flows compete for resources. We study the routing and load balancing behavior that emerges and show how the behavior relates to analytic...
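    To make the parameter sensitivity concrete, here is an AntNet-style toy sketch (not the authors' model) with probabilistic next-hop selection and pheromone reinforcement; convergence hinges on the `deposit` and `evaporation` constants:

```python
import random

# Pheromone table: pher[node][neighbor] -> desirability. Values illustrative.
pher = {"a": {"b": 1.0, "c": 1.0},
        "b": {"d": 1.0},
        "c": {"d": 1.0}}

def next_hop(node):
    """Choose a neighbor with probability proportional to its pheromone."""
    nbrs, weights = zip(*pher[node].items())
    return random.choices(nbrs, weights=weights)[0]

def reinforce(path, trip_cost, deposit=1.0, evaporation=0.1):
    """Evaporate everywhere, then reward the traversed path inversely to cost.

    Convergence of schemes like this is sensitive to `deposit` and
    `evaporation`, which is the tuning fragility the abstract refers to.
    """
    for node in pher:
        for nbr in pher[node]:
            pher[node][nbr] *= (1.0 - evaporation)
    for u, v in zip(path, path[1:]):
        pher[u][v] += deposit / trip_cost

path, node = ["a"], "a"
while node != "d":
    node = next_hop(node)
    path.append(node)
reinforce(path, trip_cost=len(path) - 1)
print(path, pher)
```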
    Motivated by interest in providing more efficient services in customer service systems, we use statistical learning methods and delay history information to predict the conditional distribution of the customers' waiting times in queueing systems. From the predicted distributions, descriptive statistics of the system such as the mean, variance and percentiles of the waiting times can be obtained, which can be used for delay announcements, SLA conformance and better system management. We model the conditional distributions by mixtures of Gaussians, parameters of which can be estimated using Mixture Density Networks. The evaluations show that exploiting more delay history information can result in much more accurate predictions under realistic time-varying arrival assumptions.
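    Given mixture parameters as an MDN would emit them, the mean, variance, and percentiles mentioned in the abstract follow in closed form or by a one-dimensional root search; the parameter values below are placeholders:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

# Example mixture parameters as an MDN might emit them (illustrative values).
pi = np.array([0.6, 0.4])      # component weights
mu = np.array([2.0, 5.0])      # component means (e.g., minutes of delay)
sigma = np.array([0.5, 1.5])   # component standard deviations

# Mean and variance of a Gaussian mixture in closed form.
mean = np.sum(pi * mu)
var = np.sum(pi * (sigma ** 2 + mu ** 2)) - mean ** 2

def percentile(q):
    """Invert the mixture CDF numerically for the q-th percentile."""
    cdf = lambda x: np.sum(pi * norm.cdf(x, mu, sigma)) - q
    return brentq(cdf, mu.min() - 10 * sigma.max(), mu.max() + 10 * sigma.max())

print(mean, var, percentile(0.95))
```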
    Exponential traffic growth due to the increasing popularity of over-the-top video services has put service providers under much pressure. By promoting in-network caching, Information-Centric Networking (ICN) is a promising paradigm for addressing current challenges in the service provider's domain. This paper reports on a cache placement strategy that allows service providers to delay the onset of congestion (i.e., extend the time-to-exhaustion) as far as possible, in order to optimize capital expenditure under a limited capacity-planning budget. We show that even a limited deployment of ICN provides a substantial increase in the time-to-exhaustion of the network and a decrease in the number of links with high utilization.
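    The placement strategy itself is not spelled out in the abstract, but the time-to-exhaustion notion is easy to make concrete: with utilization u and compound traffic growth g per period, a link saturates after log(1/u)/log(1+g) periods, and a cache that absorbs a fraction of the load pushes that horizon out. A small sketch with illustrative numbers:

```python
import math

def time_to_exhaustion(utilization, growth_rate):
    """Periods until a link saturates under compound traffic growth."""
    return math.log(1.0 / utilization) / math.log(1.0 + growth_rate)

u, g, hit_rate = 0.7, 0.3, 0.25  # illustrative values
before = time_to_exhaustion(u, g)
after = time_to_exhaustion(u * (1.0 - hit_rate), g)  # cache absorbs 25% of load
print(f"without cache: {before:.2f} periods, with cache: {after:.2f} periods")
```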
    Network traffic classification using machine learning techniques has been widely studied. Most existing schemes classify entire traffic flows, but there are major limitations to their practicality. At a network router, the packets need to be processed with minimum delay, so the classifier cannot wait until the end of the flow to make a decision. Furthermore, a complicated machine learning algorithm can be too computationally expensive to implement inside the router. In this paper, we introduce flow-packet hybrid traffic classification (FPHTC), where the router makes a decision per packet based on a routing policy that is designed through transferring the learned knowledge from a flow-based classifier residing outside the router. We analyze the generalization bound of FPHTC and show its advantage over regular packet-based traffic classification. We present experimental results using a real-world traffic dataset to illustrate the classification performance of FPHTC. We show that it is...
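    The exact transfer procedure is not given in the abstract; the sketch below shows one distillation-style reading, where an expressive flow-level model labels packet-level samples so a lightweight per-packet model can run inside the router (the data here is synthetic):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins: flow-level features (offline) and packet-level features.
X_flow, y_flow = rng.normal(size=(500, 8)), rng.integers(0, 2, 500)
X_packet = rng.normal(size=(500, 3))  # e.g., size, inter-arrival, header bits

# 1) Train an expressive flow classifier outside the router.
flow_clf = RandomForestClassifier(n_estimators=50, random_state=0)
flow_clf.fit(X_flow, y_flow)

# 2) Transfer: the flow model labels packet samples from the same flows,
#    then a model cheap enough to run per packet is fit on those labels.
pseudo_labels = flow_clf.predict(X_flow)
packet_clf = DecisionTreeClassifier(max_depth=4, random_state=0)
packet_clf.fit(X_packet, pseudo_labels)

print(packet_clf.predict(X_packet[:5]))
```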
    Recent studies have demonstrated that machine learning can be useful for application-oriented network traffic classification. However, a network operator may not be able to infer the application of a traffic flow due to the frequent appearance of new applications or due to privacy and other constraints set by regulatory bodies. In this work, we consider traffic flow classification based on the class of service (CoS), using delay sensitivity as an example in this preliminary study. Our focus is on direct CoS classification without first inferring the application. Our experiments with real-world encrypted TCP flows show that this direct approach can be substantially more accurate than a two-step approach that first classifies the flows based on their applications. However, without invoking application labels, the direct approach is more opaque than the two-step approach. Therefore, to provide human understandable interpretation of the trained learning model, we further propose an expl...
    This paper reports on an autonomic network management architecture based on the concept of "evolution". A management methodology is developed that relies on ideas from evolutionary science, virtual networks, and autonomic networking. We argue that any communication network can be modeled as an evolved topology based on survivability and performance requirements. The evolution is in the direction of decreasing the chance of congestion and increasing the network's robustness. We describe the architecture of our network management system in detail and tie it to the theory of evolution. We evaluate the "betweenness centrality" of network topologies and build on it a robust routing algorithm to manage the transport of packets in the network. This routing scheme is at the heart of our proposed network management system.
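    Edge betweenness is readily computed with standard tooling; the sketch below evaluates it with networkx and penalizes the most central links so that shortest-path routing spreads load away from congestion-prone edges (the penalty factor and stand-in topology are illustrative, not the paper's scheme):

```python
import networkx as nx

G = nx.karate_club_graph()  # stand-in topology for illustration

# Edge betweenness: fraction of shortest paths crossing each link.
eb = nx.edge_betweenness_centrality(G)

# Penalize central links so routing spreads load away from hotspots.
for (u, v), b in eb.items():
    G[u][v]["cost"] = 1.0 + 10.0 * b  # penalty factor is illustrative

path = nx.shortest_path(G, source=0, target=33, weight="cost")
print(path)
```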
    Network criticality is a graph-theoretic metric that quantifies network robustness and was originally designed to capture the effect of environmental changes in core networks. This paper investigates the application of network criticality in designing robust power allocation and flow assignment algorithms for wireless networks. Achieving robust behavior in wireless networks is a challenging task due to constant changes in channel conditions and interference. We consider network criticality as a natural robustness metric, and propose approaches to preserve the useful convexity properties of network criticality while resolving issues related to the non-convexity of Shannon's capacity.
    Network flow classification is essential to the proper provisioning of Quality of Service (QoS). Conventional machine-learning-based flow classification methods assume reliable knowledge of the flow features. However, in practice, malicious flow generators can manipulate the flow features to increase the likelihood of certain learning outcomes, e.g., in terms of the QoS requirement label. Training a classifier that is robust to such feature manipulation is imperative. In this work, we present a study on robust flow classification against malicious feature manipulation. We leverage a detailed system model to capture the relation between the classifier and malicious flow generators, and propose a Stackelberg-game-based solution framework to train a robust classifier. We conduct extensive experimentation using real-world traces. For flows with manipulated features, the Stackelberg classifier trained by our solution framework significantly outperforms a non-robust classifier that is oblivious...
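    The paper's game solution is not reproduced here; the sketch below shows a simplified leader-follower alternation on synthetic data, where the follower applies a norm-bounded feature shift toward the favorable class and the leader retrains on the mixture:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic QoS labels

def best_response(clf, X, budget=0.5):
    """Follower: shift features along the weight vector (bounded by `budget`)
    to push flows toward the favorable class."""
    w = clf.coef_[0]
    step = budget * w / np.linalg.norm(w)
    return X + step  # all flows nudged toward the positive label

clf = LogisticRegression().fit(X, y)
for _ in range(5):  # leader/follower alternation (simplified, not the paper's)
    X_adv = best_response(clf, X)
    X_mix = np.vstack([X, X_adv])    # normal plus manipulated flows
    y_mix = np.concatenate([y, y])   # true labels are unchanged
    clf = LogisticRegression().fit(X_mix, y_mix)

print(clf.score(best_response(clf, X), y))  # accuracy under manipulation
```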
    A large body of network-related problems can be formulated or explained by the Moore-Penrose inverse of the graph Laplacian matrix of the network. This paper studies the impact of overlaying or removing a subgraph (inserting/removing a group of links, or modifying a set of link weights) on the Moore-Penrose inverse of the Laplacian matrix of an existing network topology. Moreover, an iterative method is proposed to find the point-to-point resistance distance and network criticality of a graph, as a key performance measure for studying the robustness of a network under link insertion and/or link removal.
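    For a single link insertion, the pseudoinverse update is a rank-one correction: adding edge (i, j) with weight w changes L to L + w*b*b^T with b = e_i - e_j, and because b is orthogonal to the Laplacian's null space, a Sherman-Morrison-style identity applies directly to the pseudoinverse. A numpy sketch with a numerical check:

```python
import numpy as np

def laplacian_pinv_add_edge(Lp, i, j, w):
    """Update the Laplacian pseudoinverse after inserting edge (i, j).

    With b = e_i - e_j, the new Laplacian is L + w b b^T; because b is
    orthogonal to the all-ones null space, a Sherman-Morrison-style
    correction applies directly to the pseudoinverse.
    """
    b = np.zeros(Lp.shape[0])
    b[i], b[j] = 1.0, -1.0
    u = Lp @ b
    return Lp - (w / (1.0 + w * (b @ u))) * np.outer(u, u)

# Verify against direct recomputation on a small example.
W = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
L = np.diag(W.sum(1)) - W
Lp = np.linalg.pinv(L)
Lp_fast = laplacian_pinv_add_edge(Lp, 0, 2, w=1.0)

W[0, 2] = W[2, 0] = 1.0
Lp_direct = np.linalg.pinv(np.diag(W.sum(1)) - W)
print(np.allclose(Lp_fast, Lp_direct))  # True
```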
    This paper presents CVST, an open, scalable platform for smart transportation application development. CVST resources can be elastically scaled up/down, or scaled out, to robustly adjust to varying demands on the CVST portal. CVST provides APIs to access all live and historical transportation data, as well as analytics and algorithmic engines provisioned within the platform. Third-party application developers and researchers can create their own space in the CVST cloud environment and build their applications.
    This paper looks at the problem of traffic engineering and network control from a new perspective. A graph-theoretic metric, betweenness, in combination with a network weight matrix, is used to characterize the robustness of a network. Theoretical results lead to a definition of "criticality" for nodes and links. It is shown that this quantity is a global network quantity that depends on the weight matrix of the graph. Strict convexity of network criticality is proved, and an optimization problem is solved to minimize network criticality as a function of the weight matrix, which in turn provides maximum robustness. Investigation of the optimality conditions suggests directions for designing appropriate control laws and traffic engineering methods that robustly assign traffic flows. The choice of path for routing flows in these traffic engineering methods is in the direction of preserving the robustness of the network to unforeseen changes in topology and traffic demands. The pr...
    End-to-end delay is a critical attribute of quality of service (QoS) in application domains such as cloud computing and computer networks. This metric is particularly important in tandem service systems, where the end-to-end service is provided through a chain of services. Service-rate control is a common mechanism for providing QoS guarantees in service systems. In this paper, we introduce a reinforcement learning-based (RL-based) service-rate controller that provides probabilistic upper bounds on the end-to-end delay of the system, while preventing the overuse of service resources. In order to have a general framework, we use queueing theory to model the service systems. However, we adopt an RL-based approach to avoid the limitations of queueing-theoretic methods. In particular, we use Deep Deterministic Policy Gradient (DDPG) to learn the service rates (action) as a function of the queue lengths (state) in tandem service systems. In contrast to existing RL-based methods that quant...
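    A full DDPG agent is too long to include, but the environment interface the abstract describes (state = queue lengths, action = service rates) is easy to sketch; the dynamics and reward shaping below are illustrative assumptions, not the paper's model:

```python
import numpy as np

class TandemQueueEnv:
    """Minimal discrete-time tandem queue for RL (illustrative, not the paper's).

    State: queue lengths; action: per-stage service rates; reward penalizes
    end-to-end delay-bound violations and the total service rate used.
    """
    def __init__(self, stages=3, arrival_rate=0.8, delay_bound=10.0):
        self.q = np.zeros(stages)
        self.lam, self.bound = arrival_rate, delay_bound
        self.rng = np.random.default_rng()

    def step(self, rates):
        arrivals = self.rng.poisson(self.lam)
        served = self.rng.poisson(rates)  # service completions per stage
        # Stage 0 receives external arrivals; stage k receives stage k-1's output.
        inflow = np.concatenate(([arrivals], np.minimum(served, self.q)[:-1]))
        self.q = np.maximum(self.q + inflow - served, 0.0)
        # Little's-law style delay proxy: total backlog over arrival rate.
        delay_est = self.q.sum() / max(self.lam, 1e-9)
        penalty = 1.0 if delay_est > self.bound else 0.0
        reward = -penalty - 0.1 * rates.sum()  # hypothetical reward shaping
        return self.q.copy(), reward

env = TandemQueueEnv()
state, reward = env.step(np.array([1.0, 1.0, 1.0]))
print(state, reward)
```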
    Ensuring the conformance of a service system’s end-to-end delay to service level agreement (SLA) constraints is a challenging task that requires statistical measures beyond the average delay. In this paper, we study the real-time prediction of the end-to-end delay distribution in systems with composite services such as service function chains. In order to have a general framework, we use queueing theory to model service systems, while also adopting a statistical learning approach to avoid the limitations of queueing-theoretic methods such as stationarity assumptions or other approximations that are often used to make the analysis mathematically tractable. Specifically, we use deep mixture density networks (MDN) to predict the end-to-end distribution of the delay given the network’s state. As a result, our method is sufficiently general to be applied in different contexts and applications. Our evaluations show a good match between the learned distributions and the simulations, which ...
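    A compact MDN can be sketched in a few lines of PyTorch; layer sizes, component count, and the synthetic data below are illustrative. The head emits mixture logits, means, and log-scales, trained with the mixture negative log-likelihood:

```python
import torch
import torch.nn as nn

class MDN(nn.Module):
    """Predicts a K-component Gaussian mixture over end-to-end delay."""
    def __init__(self, state_dim=4, hidden=64, k=5):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.pi = nn.Linear(hidden, k)         # mixture logits
        self.mu = nn.Linear(hidden, k)         # component means
        self.log_sigma = nn.Linear(hidden, k)  # component log-scales

    def forward(self, x):
        h = self.body(x)
        return self.pi(h), self.mu(h), self.log_sigma(h)

def mdn_nll(pi_logits, mu, log_sigma, y):
    """Negative log-likelihood of delays y under the predicted mixture."""
    comp = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = comp.log_prob(y.unsqueeze(-1))  # per-component log density
    log_mix = torch.logsumexp(torch.log_softmax(pi_logits, -1) + log_prob, -1)
    return -log_mix.mean()

# One illustrative training step on synthetic data.
model = MDN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 4), torch.rand(32) * 10  # network state, observed delay
opt.zero_grad()
loss = mdn_nll(*model(x), y)
loss.backward()
opt.step()
print(float(loss))
```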
