To calculate the time delay (latency) in ns3, we need to monitor the time at which a packet is sent and the time at which it is received at the destination node. This is also known as "end-to-end delay". Here is a detailed illustration of how to implement a time-delay simulation in ns3:
Steps to Calculate Time Delay
- Set up the Simulation: Create the network topology and configure the links between the nodes.
- Install Applications: Use UDP client and server applications to send and receive packets.
- Record Packet Timestamps: When packets are sent and received, use a packet tag to record the timestamps.
- Calculate Delay: Compute the delay as the difference between the send and receive timestamps for each packet.
Example Implementation
Given below is a sample configuration for a basic point-to-point network in ns3 that evaluates the end-to-end delay of packets:
Setting Up the Network Simulation
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/applications-module.h"
#include "ns3/packet.h"
#include <iostream>

using namespace ns3;

NS_LOG_COMPONENT_DEFINE ("TimeDelayExample");

// Custom tag that carries the send timestamp with each packet.
class TimeTag : public Tag
{
public:
  TimeTag () {}
  TimeTag (Time time) : m_time (time) {}

  static TypeId GetTypeId (void)
  {
    static TypeId tid = TypeId ("ns3::TimeTag")
      .SetParent<Tag> ()
      .AddConstructor<TimeTag> ();
    return tid;
  }
  virtual TypeId GetInstanceTypeId (void) const
  {
    return GetTypeId ();
  }
  virtual void Serialize (TagBuffer i) const
  {
    int64_t time = m_time.GetNanoSeconds ();
    i.Write ((const uint8_t *)&time, sizeof (time));
  }
  virtual void Deserialize (TagBuffer i)
  {
    int64_t time;
    i.Read ((uint8_t *)&time, sizeof (time));
    m_time = NanoSeconds (time);
  }
  virtual uint32_t GetSerializedSize (void) const
  {
    return sizeof (int64_t);
  }
  virtual void Print (std::ostream &os) const
  {
    os << "t=" << m_time;
  }
  void SetTime (Time time)
  {
    m_time = time;
  }
  Time GetTime (void) const
  {
    return m_time;
  }

private:
  Time m_time;
};

// Tag each transmitted packet with the current simulation time.
void
PacketSentCallback (Ptr<const Packet> packet)
{
  TimeTag tag (Simulator::Now ());
  // Packet::AddByteTag is a const member function, so no const_cast is needed.
  packet->AddByteTag (tag);
}

// On reception, recover the send timestamp and compute the delay.
void
PacketReceivedCallback (Ptr<const Packet> packet)
{
  TimeTag tag;
  bool found = packet->FindFirstMatchingByteTag (tag);
  if (found)
    {
      Time sendTime = tag.GetTime ();
      Time receiveTime = Simulator::Now ();
      Time delay = receiveTime - sendTime;
      std::cout << "Packet delay: " << delay.GetMicroSeconds () << " microseconds" << std::endl;
    }
}

int main (int argc, char *argv[])
{
  Time::SetResolution (Time::NS);
  LogComponentEnable ("TimeDelayExample", LOG_LEVEL_INFO);

  // Create nodes
  NodeContainer nodes;
  nodes.Create (2);

  // Set up point-to-point link
  PointToPointHelper pointToPoint;
  pointToPoint.SetDeviceAttribute ("DataRate", StringValue ("5Mbps"));
  pointToPoint.SetChannelAttribute ("Delay", StringValue ("2ms"));
  NetDeviceContainer devices;
  devices = pointToPoint.Install (nodes);

  // Install the Internet stack
  InternetStackHelper stack;
  stack.Install (nodes);
  Ipv4AddressHelper address;
  address.SetBase ("10.1.1.0", "255.255.255.0");
  Ipv4InterfaceContainer interfaces = address.Assign (devices);

  // Set up UDP server on Node 1
  uint16_t port = 9;
  UdpServerHelper server (port);
  ApplicationContainer serverApp = server.Install (nodes.Get (1));
  serverApp.Start (Seconds (1.0));
  serverApp.Stop (Seconds (10.0));

  // Set up UDP client on Node 0
  UdpClientHelper client (interfaces.GetAddress (1), port);
  client.SetAttribute ("MaxPackets", UintegerValue (320));
  client.SetAttribute ("Interval", TimeValue (MilliSeconds (50)));
  client.SetAttribute ("PacketSize", UintegerValue (1024));
  ApplicationContainer clientApp = client.Install (nodes.Get (0));
  clientApp.Start (Seconds (2.0));
  clientApp.Stop (Seconds (10.0));

  // Connect packet sent and received callbacks
  devices.Get (0)->TraceConnectWithoutContext ("PhyTxEnd", MakeCallback (&PacketSentCallback));
  devices.Get (1)->TraceConnectWithoutContext ("PhyRxEnd", MakeCallback (&PacketReceivedCallback));

  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}
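As an alternative to tagging packets by hand, ns3's FlowMonitor module can collect per-flow delay statistics automatically. A sketch of how it could be wired into the main () above (this is a fragment, not a standalone program; it assumes the flow-monitor module is available and linked):

```cpp
// Fragment only: install the monitor before Simulator::Run () in main ().
#include "ns3/flow-monitor-module.h"

FlowMonitorHelper flowmonHelper;
Ptr<FlowMonitor> monitor = flowmonHelper.InstallAll ();

Simulator::Run ();

// After the run, delaySum / rxPackets gives the mean end-to-end delay per flow.
monitor->CheckForLostPackets ();
for (auto const &entry : monitor->GetFlowStats ())
  {
    const FlowMonitor::FlowStats &st = entry.second;
    if (st.rxPackets > 0)
      {
        std::cout << "Flow " << entry.first << " mean delay: "
                  << (st.delaySum / st.rxPackets).GetMicroSeconds ()
                  << " microseconds" << std::endl;
      }
  }
```

FlowMonitor measures delay at the IP layer, so its numbers can differ slightly from the PHY-level trace used in the main example.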
Explanation:
The process for measuring the time delay is described below:
- TimeTag Class:
- A custom Tag class TimeTag is defined to store the timestamp when the packet is sent.
- This tag is serialized and deserialized with the packet.
- Simulation Setup:
- Two nodes are created and connected using a point-to-point link.
- The link is configured with a specific data rate and delay.
- Application Setup:
- A UDP server is installed on the destination node (Node 1).
- A UDP client is installed on the source node (Node 0) to send packets to the server.
- Packet Timestamps:
- PacketSentCallback: Adds a TimeTag to the packet when it is sent, storing the current simulation time.
- PacketReceivedCallback: Retrieves the TimeTag from the received packet and calculates the delay.
- Running the Simulation:
- The simulation runs for a specified period, during which packets are sent and received.
- The delay for each packet is printed to the console.
Overall, we learned how the end-to-end delay is calculated from the send and receive times of packets in an ns3 simulation. Latency can be simulated and calculated in a similar way in other simulation tools.
If you need assistance with project performance or with calculating time delay in the ns3 tool, feel free to reach out to ns3simulation.com. We are here to help with your time-delay networking needs; share your parameters with us.