
Willow: Saving Data Center Network Energy for Network-Limited Flows

Today's giant data centers are power hungry. Saving data center energy not only helps control operational cost but also benefits the sustainable growth of cloud services. Because modern data centers deploy far more switches, and server-side power management techniques have matured, saving energy in the data center network is becoming increasingly important. Most previous work on saving data center network energy focuses on aggregating flows onto as few switches as possible. In this paper, however, we argue that this method may not work for network-limited flows, whose throughputs vary elastically with the competing flows.

To save the network energy consumed by such elastic flows, we propose a flow scheduling approach called Willow, which takes into consideration both the number of switches involved and their active working durations. We formulate the problem as an optimization program and design a greedy approximation algorithm to schedule flows in an online manner. Simulations based on MapReduce traces show that Willow can save up to 60 percent of network energy compared with ECMP scheduling in typical settings, and outperforms classical heuristic algorithms such as simulated annealing and particle swarm optimization. Testbed experiments demonstrate that this kind of dynamic energy-efficient flow scheduling has negligible impact on upper-layer applications.
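
As a rough illustration of the greedy idea (a minimal sketch with hypothetical names and an assumed per-switch power constant, not Willow's actual implementation), the following Python snippet assigns each arriving flow to the candidate path that minimizes the estimated incremental switch energy, accounting for switches that are already active:

# Hypothetical sketch of a Willow-style greedy online scheduler (not the
# authors' implementation). Each arriving flow is assigned to the candidate
# path that minimizes the estimated incremental switch-energy cost, which
# accounts for both newly powered-on switches and the extension of active
# time on switches that are already on.

P_SWITCH = 100.0  # assumed per-switch active power in watts

def incremental_energy(path, flow_duration, active_until, now):
    """Estimated extra switch energy (joules) if the flow uses this path."""
    cost = 0.0
    flow_end = now + flow_duration
    for switch in path:
        busy_until = active_until.get(switch, now)  # switch idle if absent
        # Extra active time this flow forces on the switch.
        extra = max(0.0, flow_end - max(busy_until, now))
        cost += P_SWITCH * extra
    return cost

def schedule_flow(candidate_paths, flow_duration, active_until, now):
    """Greedily pick the path with the smallest incremental energy."""
    best = min(candidate_paths,
               key=lambda p: incremental_energy(p, flow_duration, active_until, now))
    # Commit: extend the active window of every switch on the chosen path.
    for switch in best:
        active_until[switch] = max(active_until.get(switch, now), now + flow_duration)
    return best

# Toy usage: two equal-cost paths, one reusing an already-active switch.
state = {"agg1": 5.0}           # agg1 stays on until t = 5 s
paths = [["tor1", "agg1", "tor2"], ["tor1", "agg2", "tor2"]]
print(schedule_flow(paths, flow_duration=3.0, active_until=state, now=0.0))

In this toy case the scheduler prefers the path through the already-active aggregation switch, even though both paths have the same hop count.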

CUTBUF: Buffer Management and Router Design for Traffic Mixing in VNET-based NoCs

Router buffer design and management strongly influence the energy, area, and performance of on-chip networks; hence it is crucial to encompass all of these aspects in the design process. At the same time, the NoC design cannot disregard the prevention of network-level and protocol-level deadlocks, which requires devoting dedicated buffer resources to that purpose. In Chip Multiprocessors (CMPs), the coherence protocol usually requires different virtual networks (VNETs) to avoid deadlocks. Moreover, VNET utilization is highly unbalanced, and buffers cannot be shared between VNETs because different traffic types must be isolated. This paper proposes CUTBUF, a novel NoC router architecture that dynamically assigns virtual channels (VCs) to VNETs depending on the actual VNET load, significantly reducing the number of physical buffers in routers and thus saving area and power without decreasing NoC performance.

Moreover, CUTBUF allows the same buffer to be reused for different traffic types while ensuring that the optimized NoC is deadlock-free at both the network and protocol levels. In this perspective, all VCs are treated as spare queues not statically assigned to a specific VNET, and the coherence protocol only imposes a minimum number of queues to be implemented. Synthetic applications as well as real benchmarks have been used to validate CUTBUF, considering architectures ranging from 16 up to 48 cores. Moreover, a complete RTL router has been designed to explore area and power overheads. Results highlight that CUTBUF can reduce router buffers by up to 33% with a 2% performance degradation and a 5% decrease in operating frequency, yielding area and power savings of up to 30.6% and 30.7%, respectively. Conversely, when the same number of buffers is used, the flexibility of the proposed architecture improves the performance of the baseline NoC router by 23.8%.
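
The following Python sketch (illustrative only; class and parameter names are hypothetical, and the real design is an RTL router, not software) conveys the core allocation idea: VCs are spare queues granted to VNETs on demand, while each VNET retains the protocol-imposed minimum so deadlock freedom is preserved:

# Hypothetical sketch of CUTBUF-style dynamic VC allocation. A shared pool
# of spare virtual channels (VCs) is handed out to virtual networks (VNETs)
# on demand, while each VNET keeps the minimum number of queues the
# coherence protocol requires for deadlock freedom.

class VcPool:
    def __init__(self, total_vcs, vnets, min_per_vnet=1):
        self.free = list(range(total_vcs))          # spare queues, not statically bound
        self.owner = {}                             # vc id -> vnet currently using it
        self.reserved = {v: min_per_vnet for v in vnets}  # protocol-imposed minimum

    def in_use(self, vnet):
        return sum(1 for v in self.owner.values() if v == vnet)

    def allocate(self, vnet):
        """Grant a VC to `vnet` if a spare one exists; None means back-pressure."""
        if not self.free:
            return None
        # Keep enough spares so every other VNET can still reach its minimum.
        spares_needed = sum(max(0, self.reserved[v] - self.in_use(v))
                            for v in self.reserved if v != vnet)
        if len(self.free) <= spares_needed:
            return None
        vc = self.free.pop()
        self.owner[vc] = vnet
        return vc

    def release(self, vc):
        """Return a drained VC to the shared pool."""
        del self.owner[vc]
        self.free.append(vc)

# Toy usage: 4 physical VCs shared by 3 coherence VNETs.
pool = VcPool(total_vcs=4, vnets=["request", "response", "coherence"])
print(pool.allocate("response"), pool.allocate("response"))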

Human Mobility Enhances Global Positioning Accuracy for Mobile Phone Localization

The Global Positioning System (GPS) has enabled a number of geographical applications over many years. Quite a lot of location-based services, however, still suffer from considerable GPS positioning errors (usually 1 to 20 m in practice). In this study, we design and implement a high-accuracy global positioning solution based on GPS and human mobility captured by mobile phones. Our key observation is that smartphone-enabled dead reckoning provides accurate but local coordinates of users' trajectories, while GPS provides global but inconsistent coordinates. Considering them simultaneously, we devise techniques to refine the global positioning results by fitting the global positions to the structure of the locally measured ones, so that the refined positioning results are more likely to match the ground truth.
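
A minimal sketch of this fitting idea, assuming a simple rigid (rotation plus translation) least-squares alignment rather than GloCal's actual algorithm, is shown below in Python:

# The dead-reckoning trajectory is accurate in shape but in a local frame,
# while GPS fixes are global but noisy. A rigid transform (rotation +
# translation) is fitted by least squares so the local trajectory best
# matches the GPS fixes; the transformed trajectory is the refined result.

import numpy as np

def refine_positions(local_xy, gps_xy):
    """Kabsch-style 2D alignment of the local trajectory to GPS fixes."""
    local_xy = np.asarray(local_xy, dtype=float)
    gps_xy = np.asarray(gps_xy, dtype=float)
    lc, gc = local_xy.mean(axis=0), gps_xy.mean(axis=0)   # centroids
    H = (local_xy - lc).T @ (gps_xy - gc)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                                        # optimal rotation
    if np.linalg.det(R) < 0:                              # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = gc - R @ lc                                       # optimal translation
    return (R @ local_xy.T).T + t                         # refined global positions

# Toy usage: a straight walk measured locally, with noisy GPS fixes.
local = [(0, 0), (2, 0), (4, 0), (6, 0), (8, 0)]
gps = [(100.5, 199.2), (102.3, 200.9), (103.8, 199.4), (106.4, 200.6), (107.9, 199.8)]
print(refine_positions(local, gps).round(2))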

We develop a prototype system, named GloCal, and conduct comprehensive experiments in both crowded urban and spacious suburban areas. The evaluation results show that GloCal achieves a 30 percent improvement in average error with respect to GPS. GloCal uses only mobile phones and requires no infrastructure or additional reference information. As an effective and lightweight augmentation to global positioning, GloCal holds promise for real-world deployment.

Graphine: Programming Graph-Parallel Computation of Large Natural Graphs on Multicore Cluster

Graph-parallel computation has become a crucial component in emerging applications of web search, data analytics, and machine learning. In practice, most graphs derived from real-world phenomena are very large and scale-free. Unfortunately, distributed graph-parallel computation on these natural graphs still suffers from serious scalability issues on contemporary multicore clusters. To embrace the multicore architecture in distributed graph-parallel computation, we propose the Graphine framework, which features: (i) a Scatter-Combine computation abstraction, evolved from the traditional vertex-centric approach by fusing the paired scatter and gather operations, executed separately on the two sides of an edge, into a one-sided scatter.

Further coupled with an active-message mechanism, it potentially reduces intermediate message cost and enables fine-grained parallelism on multicore architectures. (ii) An Agent-Graph data model, which leverages an idea similar to vertex-cut but conceptually splits each remote replica into two agent types, scatter and combiner, resulting in less communication. We implement the Graphine framework and evaluate it using several representative algorithms on six large real-world graphs and a series of synthetic graphs with power-law degree distributions. We show that Graphine achieves sublinear scalability with the number of cores per node, the number of nodes, and graph sizes (up to one billion vertices), and is 2 to 15 times faster than the state-of-the-art PowerGraph on a cluster of sixteen multicore nodes.
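
The following Python sketch, using PageRank as a stand-in workload and hypothetical function names, illustrates the Scatter-Combine idea of a one-sided scatter whose contributions are combined at the destination (it is not the Graphine runtime):

# Each active vertex performs a one-sided scatter along its out-edges;
# contributions are combined at the destination instead of being gathered
# in a separate phase. PageRank is used as the example workload.

from collections import defaultdict

def scatter_combine_pagerank(out_edges, num_iters=10, damping=0.85):
    vertices = set(out_edges) | {v for nbrs in out_edges.values() for v in nbrs}
    rank = {v: 1.0 / len(vertices) for v in vertices}
    for _ in range(num_iters):
        combined = defaultdict(float)            # combiner state per destination
        for u, nbrs in out_edges.items():
            if not nbrs:
                continue
            msg = rank[u] / len(nbrs)            # scatter: one-sided message value
            for v in nbrs:
                combined[v] += msg               # combine at the destination
        rank = {v: (1 - damping) / len(vertices) + damping * combined[v]
                for v in vertices}
    return rank

# Toy usage on a 4-vertex graph.
graph = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
print({v: round(r, 3) for v, r in scatter_combine_pagerank(graph).items()})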

Multicent: A Multifunctional Incentive Scheme Adaptive to Diverse Performance Objectives for DTN Routing

In Delay Tolerant Networks (DTNs), nodes meet opportunistically and exchange packets only when they encounter each other. Therefore, routing is usually conducted in a store-carry-forward manner to exploit the scarce communication opportunities. As a result, different packet routing strategies, i.e., which packets are forwarded or stored with priority, lead to different routing performance objectives, such as minimal average delay or maximal hit rate. On the other hand, incentive systems are necessary for DTNs since nodes may be selfish and may not cooperate on packet forwarding and storage. However, current incentive systems for DTNs mainly focus on encouraging nodes to participate in packet forwarding/storage, but fail to further encourage nodes to follow a certain packet routing strategy that realizes a given routing performance objective.

We refer to the former as the first aspect of cooperation and the latter as the second aspect of cooperation in DTN routing. Therefore, in this paper, we first discuss the routing strategies that realize different performance objectives when nodes are fully cooperative, i.e., willing to follow both aspects of cooperation. We then propose Multicent, a game-theoretic incentive scheme that encourages nodes to follow both aspects of cooperation even when they are selfish. Basically, Multicent assigns credits for packet forwarding/storage in proportion to the priorities specified in the routing strategy. Multicent also supports adjustable Quality of Service (QoS) for packet routing between specific sources and destinations. Extensive trace-driven experimental results verify the effectiveness of Multicent.
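
A minimal Python sketch of the credit idea is given below, with a toy priority function and hypothetical parameters rather than Multicent's actual game-theoretic mechanism; because credits are proportional to strategy priorities, a credit-maximizing node forwards packets in the same order a cooperative node would:

# Credits earned for forwarding or storing a packet are proportional to the
# priority the routing strategy assigns to that packet, so a selfish,
# credit-maximizing node handles packets in the same order a fully
# cooperative node would.

def priority(packet, objective="min_delay"):
    """Toy routing-strategy priority; real strategies depend on the objective."""
    if objective == "min_delay":
        return 1.0 / max(packet["remaining_ttl"], 1)   # urgent packets first
    return packet["delivery_prob"]                      # e.g., maximize hit rate

def credits_for(packet, base_credit=10.0, objective="min_delay"):
    """Credit reward proportional to the strategy-assigned priority."""
    return base_credit * priority(packet, objective)

def choose_packets(buffer, contacts_left, objective="min_delay"):
    """A selfish node forwards the packets that pay the most credits."""
    ranked = sorted(buffer, key=lambda p: credits_for(p, objective=objective),
                    reverse=True)
    return ranked[:contacts_left]

# Toy usage: the selfish choice matches the strategy's own priority ordering.
buffer = [{"id": "a", "remaining_ttl": 30, "delivery_prob": 0.2},
          {"id": "b", "remaining_ttl": 5,  "delivery_prob": 0.9},
          {"id": "c", "remaining_ttl": 60, "delivery_prob": 0.5}]
print([p["id"] for p in choose_packets(buffer, contacts_left=2)])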

Performance evaluation of ADV, DSR & GOD in VANET for city & highway scenarios

VANET has attracted the attention of many researchers due to its wide range of applications in different fields, i.e., comfort, safety, and entertainment. It is very expensive to test every network protocol or network algorithm in a real network by connecting a number of routers, computers, and data links.

Thus, in this paper, QoS parameters such as throughput, packet drop, and collision rate are evaluated for different routing protocols on a simulator, which avoids the damage, unpredictable results, and expense of experimenting on a real network.
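
As an illustration only (the trace format and field names below are hypothetical, not an ns-3 or ns-2 trace), these QoS parameters can be computed from simulation events as follows:

# Computes throughput, packet drop ratio, and collision rate from a list of
# simple trace records.

def qos_metrics(events, sim_time_s):
    """`events` is a list of dicts like {"type": "recv", "bytes": 512}."""
    sent = sum(1 for e in events if e["type"] == "send")
    received = sum(1 for e in events if e["type"] == "recv")
    collisions = sum(1 for e in events if e["type"] == "collision")
    rx_bytes = sum(e.get("bytes", 0) for e in events if e["type"] == "recv")
    return {
        "throughput_kbps": rx_bytes * 8 / 1000.0 / sim_time_s,
        "packet_drop_ratio": (sent - received) / sent if sent else 0.0,
        "collision_rate_per_s": collisions / sim_time_s,
    }

# Toy usage with a hand-made trace.
trace = ([{"type": "send"}] * 100
         + [{"type": "recv", "bytes": 512}] * 92
         + [{"type": "collision"}] * 5)
print(qos_metrics(trace, sim_time_s=10.0))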

Analysis of the Scalability and Stability of an ACO Based Routing Protocol for Wireless Sensor Networks

Wireless Sensor Networks (WSNs) are often deployed in remote and hostile areas, and because of their limited power and vulnerability, sensors may stop functioning after some time, leading to the appearance of holes in the network. A hole created by non-functioning sensors severs the connection between one side of the network and the other, and alternative routes must be found for the network traffic. Prior research tackled the hole problem only when packets reach nodes near the hole; in that case, feedback packets are generated and the data packets are rerouted accordingly to avoid the hole.

The traffic overhead of rerouting consumes additional battery power, which increases the communication cost and reduces the lifetime of the sensors. To deal with dynamic changes in network topology in an autonomous manner, ant colony optimization (ACO) algorithms have shown very good performance in routing network traffic. In this paper, we analyze the scalability and stability of the ACO-based routing protocol BIOSARP against the issues caused by holes in WSNs. Network Simulator 2 (ns-2) is used to perform the analysis. The findings clearly demonstrate that BIOSARP can efficiently maintain data packet routing over a WSN before any hole problem arises, by switching data forwarding to the best neighboring node.
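
The following Python sketch illustrates the general ACO mechanism assumed here (probabilistic next-hop selection with pheromone evaporation and reinforcement); it is not BIOSARP's exact cost function, and all constants are illustrative. Because pheromone on links toward failed neighbors is no longer reinforced, forwarding naturally shifts around a hole:

# ACO-style next-hop selection and pheromone update for a single sensor node.

import random

ALPHA, BETA = 1.0, 2.0      # pheromone vs. heuristic weight
RHO = 0.1                   # evaporation rate

def choose_next_hop(pheromone, heuristic, alive):
    """Probabilistic choice among alive neighbors using the ACO transition rule."""
    weights = {n: (pheromone[n] ** ALPHA) * (heuristic[n] ** BETA)
               for n in pheromone if alive.get(n, True)}
    total = sum(weights.values())
    r, acc = random.uniform(0, total), 0.0
    for n, w in weights.items():
        acc += w
        if r <= acc:
            return n
    return max(weights, key=weights.get)

def update_pheromone(pheromone, chosen, reward):
    """Evaporate everywhere, reinforce the link that delivered the packet."""
    for n in pheromone:
        pheromone[n] *= (1 - RHO)
    pheromone[chosen] += reward

# Toy usage: neighbor "b" has failed (a hole), so traffic flows via "a" or "c".
pher = {"a": 1.0, "b": 1.0, "c": 1.0}
heur = {"a": 0.8, "b": 0.9, "c": 0.5}    # e.g., inverse distance to the sink
alive = {"b": False}
hop = choose_next_hop(pher, heur, alive)
update_pheromone(pher, hop, reward=0.5)
print(hop, {k: round(v, 2) for k, v in pher.items()})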

A cross validation of network system models for delay tolerant networks

This paper presents a cross validation of network system models in two simulation tools, namely the ONE Simulator and Scenargie, for the simulation of Delay or Disruption Tolerant Networks (DTNs). The study compares the simulation results from three network system models provided by the two simulators, as well as their efficiency, for a commonly used DTN scenario. The results show a fundamental problem inherent to the time-stepped approach that may introduce an unacceptable level of inaccuracy in the predicted DTN performance.

Moreover, limiting such inaccuracies leaves the time-stepped approach with no runtime performance advantage over the event-driven approach. Further, the study shows that DTN performance is highly sensitive to link establishment overheads, which can change the predicted end-to-end message delivery latencies by an order of magnitude. Other details of the network system also become important when the target network is expected to be highly loaded with communication traffic.
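
A toy Python sketch of the time-stepped inaccuracy discussed above (assumed interfaces, unrelated to the ONE or Scenargie code bases) shows how contacts shorter than the sampling step can be truncated or missed entirely:

# Compares the contact time seen by a coarse time-stepped model against the
# exact contact time an event-driven model would capture.

def stepped_contact_time(contacts, step):
    """Contact time seen when link state is only sampled every `step` seconds."""
    seen = 0.0
    t = 0.0
    end = max(e for _, e in contacts)
    while t < end:
        # The link is considered up for the whole step if it is up at the sample.
        if any(s <= t < e for s, e in contacts):
            seen += step
        t += step
    return seen

def event_driven_contact_time(contacts):
    """Exact total contact time when link up/down events are processed directly."""
    return sum(e - s for s, e in contacts)

# Toy usage: three short contacts given as (start, end) in seconds.
contacts = [(0.3, 0.9), (2.1, 2.4), (5.05, 6.95)]
print(stepped_contact_time(contacts, step=1.0), event_driven_contact_time(contacts))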

A calibration-based thermal modeling technique for complex multicore systems

A calibration-based method for constructing fast and accurate thermal models of state-of-the-art multicore systems is presented. Such models are usually required during Design Space Exploration (DSE) to evaluate various task-to-core mapping, scheduling, and processor speed-scaling options for their overall impact on system temperature. Current approaches model the thermal characteristics of the target processor using numerical simulators, which require accurate information about several critical parameters (e.g., the processor floorplan). Such parameters are not readily available, forcing system designers to use time- and cost-intensive, and possibly error-prone, techniques such as reverse-engineering them from heat maps.

Additionally, the advanced power and temperature management algorithms commonly found in state-of-the-art processors must also be accurately modeled. This paper proposes a calibration-based method for constructing the complete system thermal model of a target processor without requiring any hard-to-obtain information such as the detailed processor floorplan or system power traces. Using a sufficiently complex 8-core Intel Xeon processor as an example, we show that our approach yields an accurate thermal model that is also lightweight enough, in both memory and compute requirements, to be practical for DSE on current processors.
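
As a rough illustration of the calibration idea, the sketch below fits a simple linear utilization-to-temperature model by least squares from measured samples; the model form, data, and function names are assumptions for illustration and not the paper's actual method:

# Instead of simulating the floorplan, fit a simple linear thermal model
# directly from measured per-core utilization and core temperatures,
# T ~= A @ u + t_offset, using least squares.

import numpy as np

def calibrate(util_samples, temp_samples):
    """Fit the coupling matrix A (cores x cores) and offsets from measurements."""
    U = np.asarray(util_samples, dtype=float)          # shape (samples, cores)
    T = np.asarray(temp_samples, dtype=float)          # shape (samples, cores)
    X = np.hstack([U, np.ones((U.shape[0], 1))])       # append bias column
    W, *_ = np.linalg.lstsq(X, T, rcond=None)          # least-squares fit
    return W[:-1, :].T, W[-1, :]                       # coupling matrix A, offsets

def predict(A, offset, util):
    """Predicted core temperatures for a candidate mapping's utilization."""
    return A @ np.asarray(util, dtype=float) + offset

# Toy usage with synthetic measurements from a 2-core "system".
rng = np.random.default_rng(0)
true_A, true_off = np.array([[30.0, 5.0], [5.0, 28.0]]), np.array([40.0, 40.0])
U = rng.uniform(0, 1, size=(50, 2))
T = U @ true_A.T + true_off + rng.normal(0, 0.2, size=(50, 2))
A, off = calibrate(U, T)
print(predict(A, off, [1.0, 0.5]).round(1))   # DSE query: core0 busy, core1 at 50%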

A new RA-DA hybrid MAC approach for DVB-RCS2

This paper proposes a new MAC scheme for DVB-RCS2 aimed at efficiently handling M2M/SCADA traffic. The proposed scheme relies on the idea of complementing Random Access (RA) schemes with Dedicated Access (DA) schemes when traffic spikes feed the network. The rationale is to control the offered load on the RA slot pool so as to keep the packet loss due to collisions under a pre-defined threshold.

The new MAC algorithms are based on the interaction between the NCC and the STs and add a number of new features and dynamics, such as selectively switching greedy STs to a full DA configuration. A prototype implementation has been developed in the Network Simulator NS-2 to verify the effectiveness of the proposed approach in a preliminary case study. The achieved results are promising and highlight the main dynamics of the MAC scheme, leading to practical considerations and open issues to be addressed in future work.
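
A minimal Python sketch of the switching rationale described above is given below; the slotted-ALOHA loss approximation, threshold value, and function names are assumptions, not the paper's algorithm:

# The NCC tracks the offered load on the RA slot pool, estimates the
# collision-induced packet loss, and moves the heaviest ("greedy") STs to
# dedicated access when the predicted loss exceeds a pre-defined threshold.

import math

LOSS_THRESHOLD = 0.05   # maximum tolerated collision loss on RA slots

def predicted_ra_loss(packets_per_frame, ra_slots):
    """Slotted-ALOHA style collision loss: 1 - P(no other packet in my slot)."""
    g = packets_per_frame / ra_slots            # offered load per slot
    return 1.0 - math.exp(-g)

def select_sts_for_da(st_loads, ra_slots):
    """Move the heaviest STs to DA until RA loss drops below the threshold."""
    ra_sts = dict(st_loads)                     # ST id -> packets per frame on RA
    da_sts = []
    while ra_sts and predicted_ra_loss(sum(ra_sts.values()), ra_slots) > LOSS_THRESHOLD:
        greedy = max(ra_sts, key=ra_sts.get)    # the most demanding terminal
        da_sts.append(greedy)                   # NCC assigns it dedicated slots
        del ra_sts[greedy]
    return da_sts, ra_sts

# Toy usage: a traffic spike from one SCADA concentrator (st3).
loads = {"st1": 0.2, "st2": 0.3, "st3": 4.0, "st4": 0.1}
moved, staying = select_sts_for_da(loads, ra_slots=20)
print(moved, round(predicted_ra_loss(sum(staying.values()), 20), 3))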