Storage Allocation Plan for Routing Nodes in Delay Tolerant Network

Ligang Cong |  Huamin Yang* | Xiaoqiang Di  

School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China

Jilin Province Key Laboratory of Network and Information Security, Changchun 130022, China

Information Center, Changchun University of Science and Technology, Changchun 130022, China

Corresponding Author Email: yhm@cust.edu.cn
Pages: 89-94
DOI: https://doi.org/10.18280/isi.240113
Received: 25 October 2018 | Revised: 1 January 2019 | Accepted: 7 January 2019 | Available online: 20 April 2019

OPEN ACCESS

Abstract: 

During data storage and forwarding, the storage resources in the delay tolerant network (DTN) are often allocated unfairly among the routing nodes. To improve fairness and routing performance in the DTN, this paper puts forward an optimal storage allocation plan for routing nodes based on weighted max-min fairness. Unlike the existing storage allocation plans, which adopt the first-come first-served mode, the weighted max-min fairness principle allocates data transmission opportunities fairly among the data nodes, while giving more resources to key tasks and reducing service waiting time. The simulation experiments show that the DTN routing algorithm with optimized storage resources achieves a higher transmission success rate and a lower mean network delay than the unoptimized routing algorithm.

Keywords: 

delay tolerant network (DTN), weighted max-min fairness, routing algorithm

1. Introduction

The concept of DTN [1-3] was first proposed by the DTNRG (Delay Tolerant Network Research Group) in 2003, in an attempt to solve the frequent interruptions and long latency of the interplanetary Internet. In that year, Kevin Fall presented the famous paper "A Delay-Tolerant Network Architecture for Challenged Internets" at the ACM SIGCOMM conference, which has become a classic of the DTN field. To date, DTN has been widely applied in agricultural sensor networks [4], satellite communications [5], wildlife tracking [6-8] and many other fields, and it has immense application prospects.

Routing is a research hotspot in the DTN field. Classic DTN routing algorithms include Epidemic, PROPHET, Spray and Wait, and MaxProp. In the Epidemic algorithm, each node clones copies of its messages to every node it encounters; however, because node and cache capacities are limited, many messages are retransmitted and discarded, and substantial network resources are consumed [9]. The PROPHET algorithm, based on the movement patterns of nodes, predicts the probability that a message will reach its destination node and duplicates the message only to nodes more likely to complete the delivery, thereby limiting the number of message copies and reducing the consumption of network resources [10]. The Spray and Wait algorithm specifies an upper limit n on the number of forwarded message copies, thus avoiding the explosive growth of network overhead that unrestricted copying would cause [11]. The MaxProp routing algorithm introduces priorities for data: data with higher priority is sent first and dropped last, which greatly improves the delivery rate at the nodes [12]. Although these routing algorithms differ from one another, they all follow the basic "store-carry-forward" mode of DTN routing, which uses the nodes in the DTN as data carriers. When a node needs to transmit data, its neighbor nodes can receive the data as carriers according to a certain rule and pass it on to other nodes until the data reaches the destination node. The above routing algorithms focus on how to select proper data forwarding nodes, and their efficiency can be improved by changing the rule for picking the forwarding nodes. On this basis, this study focuses instead on optimizing the allocation of storage resources in the data forwarding nodes to improve the performance of the routing algorithm.

This paper extends the study of storage allocation plans for DTN routing nodes. We allocate the data storage resources of the data forwarding nodes based on the weighted max-min fairness principle, which ensures fairness of resource allocation while better guaranteeing service for key data. Simulation tests show that, compared with the first-come first-served allocation mode, the routing algorithm improved by this plan achieves a higher network transmission success rate and a lower average delay, significantly improving DTN network performance.

2. Node Cache Allocation Model Based on Weighted Max-Min Fairness Principle

2.1 Network situation

As shown in Figure 1, for the data forwarding node f, there are three nodes a, b and c with data forwarding requests within the communication coverage of its current location. In other words, there can be multiple data transmission requests at any one time. Assume that these nodes have large amounts of data to send, and that the storage resources of the current data forwarding node are insufficient to store all the data requested for transmission. In this case, the allocation of storage resources must be considered. If the storage of the data forwarding node is simply allocated in a first-come first-served manner, some nodes may wait too long, and some key data may even remain unserved after the data forwarding node has passed by many times. Therefore, to improve DTN network performance, it is necessary to optimize the allocation of storage resources in the data forwarding nodes.

Figure 1. Network model
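For later comparison, the first-come first-served baseline can be sketched as follows (an illustrative Python sketch under our own assumptions, not code from the paper): requests are granted in full in arrival order until the forwarding node's storage runs out, so late arrivals, however important, may receive little or nothing.

```python
def first_come_first_served(demands, space):
    """Grant each request in arrival order until the free storage runs out.

    demands: storage requested by each data node, in order of arrival
    space:   free storage of the data forwarding node
    Returns the storage granted to each node.
    """
    grants = []
    for x in demands:
        grant = min(x, space)   # a late request may be cut short or rejected
        grants.append(grant)
        space -= grant
    return grants


# Hypothetical example: the node arriving last requests 40 MB but receives only 10 MB,
# regardless of how important its data is.
print(first_come_first_served([30, 20, 40], 60))   # -> [30, 20, 10]
```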

2.2 Optimization algorithm of storage allocation for data forwarding node

The weighted max-min fairness algorithm [7] is widely applied in network communication fields such as network routing, traffic distribution and resource scheduling. For the DTN network situation described in the previous section, we address the problem by optimizing the allocation of storage resources in the data forwarding nodes: the weighted max-min fairness principle is used to optimize the storage allocation plan, thereby reducing the average waiting time of services and improving the quality of network services.

As shown in Figure 1, when the data forwarding node f is at its current position, there are multiple nodes within its communication coverage waiting for it to provide data forwarding services. These data nodes submit their service-request packets to the data forwarding node; after parsing them, f acquires the total volume of data to be stored and forwarded. If this total volume is less than or equal to the storage resources of f, normal data storage and forwarding services are provided for all nodes; otherwise, the allocation of memory space must be optimized.

To improve the generality of the model, it is assumed that there are n data nodes within the forwarding coverage, denoted by the set D={d1,……,dn}; the data storage requested by the data nodes in this session is X={x1,……,xn}, where x1 ≤ x2 ≤ …… ≤ xn. It is also assumed that the remaining storage space of the current data forwarding node f is s, and that the communication window between the data forwarding node and the data nodes is long enough to perform the relevant operations.

Assume that the weights of the n data nodes in this session are W={w1,……,wn}. A weight represents the importance of the corresponding data node, and can also be interpreted as the quality-of-service requirement of that session. Here the weights are divided into 10 levels, represented by 1~10, where 1 is the minimum level and 10 the maximum.

Based on the above assumptions, the remaining storage space s of the data forwarding node f is first allocated in proportion to the weights of the n nodes. The first-round allocation can be expressed as B1={b11,……,b1n}, where b1i is given by Formula (1).

$b_{1i}=\frac{w_{i}}{\sum\limits_{j=1}^{n}{w_{j}}}\cdot s$ (1)

After finishing the first round of storage space allocation, compare each data node's demand xi with its allocated volume b1i. If b1i equals xi, node di acquires the storage space b1i; if b1i is greater than xi, the storage space xi is allocated to node di and the excess is reclaimed; if b1i is less than xi, the storage space b1i is assigned and node di participates in the next round of allocation. The storage resources obtained by the data nodes in the first round are given by Formula (2).

$m_{1i}=\begin{cases} x_{i}, & \text{if } x_{i}<b_{1i} \\ b_{1i}, & \text{if } x_{i}=b_{1i} \\ b_{1i}, & \text{if } x_{i}>b_{1i} \end{cases}$ (2)
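For illustration only, the short Python sketch below (our own code, not part of the original paper; the function name allocate_round is assumed) implements one round of Formulas (1) and (2): the free space s is split in proportion to the weights, and each node receives at most its requested amount, so any excess returns to the pool.

```python
def allocate_round(demands, weights, space):
    """One round of weighted allocation following Formulas (1) and (2).

    demands: remaining storage demands x_i of the still unsatisfied nodes
    weights: weights w_i of those nodes
    space:   free storage s available at the forwarding node this round
    Returns the amount m_i granted to each node in this round.
    """
    total_weight = sum(weights)
    grants = []
    for x_i, w_i in zip(demands, weights):
        b_i = space * w_i / total_weight   # Formula (1): weighted share
        grants.append(min(x_i, b_i))       # Formula (2): never exceed the demand
    return grants
```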

After the first round of allocation, suppose the remaining storage resource of the data forwarding node is s1 and there are still k nodes whose storage demands are not fully met. A second round of allocation is then performed for these k nodes according to Formulas (1) and (2), with s replaced by s1 and the weight sum taken over the k remaining nodes. If the second round still cannot satisfy all demands and the remaining storage of the data forwarding node is not exhausted, the above steps are repeated until either the storage is used up or all demands are met, as shown in Figure 2.

Figure 2. Allocation process of storage resource in data forwarding nodes
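The complete multi-round procedure of Figure 2 can be sketched as follows (again an illustrative implementation under our own assumptions, not the authors' code): unsatisfied nodes re-enter the next round with whatever storage is left over, and the loop stops once the space is exhausted or every demand is met.

```python
def weighted_max_min_allocate(demands, weights, space, eps=1e-9):
    """Iterative weighted max-min storage allocation at a data forwarding node.

    demands: storage requested by each data node (x_1 ... x_n)
    weights: weight of each data node (w_1 ... w_n, e.g. levels 1-10)
    space:   remaining storage s of the forwarding node
    Returns the total storage finally granted to each node.
    """
    n = len(demands)
    granted = [0.0] * n
    active = [i for i in range(n) if demands[i] > 0]

    # If the total demand fits into the free space, serve every node in full.
    if sum(demands) <= space:
        return [float(x) for x in demands]

    while active and space > eps:
        round_space = space
        total_weight = sum(weights[i] for i in active)
        still_unsatisfied = []
        for i in active:
            share = round_space * weights[i] / total_weight   # Formula (1)
            need = demands[i] - granted[i]
            grant = min(need, share)                          # Formula (2)
            granted[i] += grant
            space -= grant
            if granted[i] < demands[i] - eps:
                still_unsatisfied.append(i)
        active = still_unsatisfied
    return granted


# Hypothetical example: three nodes request 5, 50 and 40 MB with weights 1, 2 and 5,
# while the forwarding node has only 60 MB free. The small request is served in full
# in round one, and the leftover space is redistributed by weight in round two.
print(weighted_max_min_allocate([5, 50, 40], [1, 2, 5], 60))
```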

3. Algorithm Simulation and Analysis

Here, we vary the number of data forwarding nodes and their storage space, and observe the resulting network transmission success rate and average network delay. The performance of the weighted max-min and first-come first-served storage allocation modes is then compared and analyzed.
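For reference, the two metrics can be computed from per-message records as sketched below (our own illustrative definitions, not output of the ONE simulator): the transmission success rate is the fraction of created messages that reach their destinations, and the average delay is the mean time from creation to delivery over the delivered messages.

```python
def delivery_metrics(messages):
    """messages: list of (create_time, deliver_time) pairs;
    deliver_time is None if the message never reached its destination."""
    delivered = [(c, d) for c, d in messages if d is not None]
    success_rate = len(delivered) / len(messages) if messages else 0.0
    avg_delay = (sum(d - c for c, d in delivered) / len(delivered)
                 if delivered else float("inf"))
    return success_rate, avg_delay
```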

3.1 Simulation setting

The simulation tool used herein is the ONE simulator [13-14], a Java-based tool developed by Ari Keränen and Jörg Ott at Aalto University in Helsinki, Finland. It runs on multiple platforms, including Windows and Linux.

As shown in Figure 3, the simulation environment contains three sets of data nodes, each of which comprises multiple nodes that move randomly around a fixed point. Three routes are set up among the three sets of data nodes, and the data forwarding nodes move cyclically along these routes to provide data storage and forwarding services. Their storage resources are allocated using either the weighted max-min or the first-come first-served algorithm. The other main simulation parameters are listed in Table 1.

Figure 3. Schematic diagram of the simulation network topology

Table 1. Main simulation parameters

Parameter | Value | Description
Communication rate | 250 | 250 kbps
Node communication radius | 10 | Node communication radius is 10 m
Node movement model | ClusterMovement, MapRouteMovement | Data nodes use ClusterMovement; data forwarding nodes use MapRouteMovement
Node movement speed | 3, 5 | Nodes move at 3~5 m/s
Data generation interval | 8, 12 | A packet is generated every 8~12 s
Number of data nodes | 120 | Number of data nodes requiring service in the network
Data survival time | 60 | Packet survival time is 60 min
Scene size | 4500, 3400 | The simulation scene is 4500 m × 3400 m
Simulation duration | 43200 | The simulation duration is 12 h

 
3.2 Analysis of simulation results

Here, by modifying the sizes of the storage resources and packets and the numbers of data forwarding nodes and data nodes, the performance of the weighted max-min and first-come first-served storage resource allocation modes is compared and analyzed under different conditions.

3.2.1 Relationship between the storage space of data forwarding nodes and the performance of the storage allocation modes

This section discusses the relationship between the storage resources of data forwarding nodes and the performance of the routing algorithms. Based on the simulation scenario in Table 1, the number of data forwarding nodes is set to 5. By modifying the size of the storage resources in the data forwarding nodes, we analyze how it affects the performance of the different storage allocation modes; see Figures 4 and 5 for the test results.

As shown in Figure 4, the network transmission success rate rises significantly as the storage resources of the data forwarding nodes grow. Assuming the service demand remains unchanged, the success rate levels off once the storage resources increase beyond a certain point. Comparing the two curves, the routing algorithm based on the weighted max-min fairness principle clearly outperforms the one based on the first-come first-served allocation mode in terms of success rate.

Figure 4. Relationship between storage and routing transmission success rate of data forwarding node

As shown in Figure 5, under both storage allocation modes the average network delay drops significantly as the storage resources of the data forwarding nodes increase, and the routing algorithm based on the weighted max-min fairness principle again performs significantly better than the one based on the first-come first-served mode.

Figure 5. Relationship between storage and route average delays of data forwarding nodes

3.2.2 Relationship between the number of data forwarding nodes and the performance of storage allocation modes

This section discusses how the number of data forwarding nodes affects the two algorithms based on the weighted max-min and first-come first-served allocation modes. Based on the simulation scenario in Table 1, the storage resource of the data forwarding nodes is set to 50 MB. By varying the number of data forwarding nodes, we examine how the performance of the algorithms under the two storage allocation modes changes; see Figures 6 and 7 for the test results.

As shown in Figure 6, the network transmission success rate grows with the number of data forwarding nodes: the more forwarding nodes there are, the higher the success rate. The routing algorithm based on the weighted max-min fairness principle again obtains a higher success rate than the one based on the first-come first-served allocation mode.

Figure 6. Relationship between the number of data forwarding nodes and the route transmission success rate

As shown in Figure 7, increasing the number of data forwarding nodes is highly effective in reducing the average latency of the DTN network. Both routing algorithms significantly reduce the average network latency, especially the one optimized by the weighted max-min fairness principle.

Figure 7. Relationship between the number of data forwarding nodes and the average route time delay

The main reasons for the above results are as follows: (1) When the number of data forwarding nodes is constant, a larger storage space can serve more data nodes, improving the network transmission success rate. (2) With constant storage resources, allocating them according to the weighted max-min fairness principle serves more nodes, thus shortening the average waiting time of the data nodes and reducing the network delay. (3) The more data forwarding nodes there are, the more opportunities the data nodes have to be served, so the DTN transmission success rate and average delay improve accordingly.

 

3.2.3 Relationship between the number of data nodes and the performance of the storage allocation mode

Figure 8. Relationship between the number of data nodes and the route transmission success rate

This section describes the relationship between the number of data nodes and the two algorithms based on the weighted max-min fairness principle and the first-come first-served allocation mode. Based on the simulation scenario in Table 1, the number of data forwarding nodes is set to 5, the storage space is 50 MB, and the number of data nodes around each cluster center is gradually increased. By modifying the number of data nodes, we observe how the network routing performance of the algorithms under the two storage allocation modes changes; see Figures 8 and 9 for the test results.

As shown in Figure 8, with the data forwarding nodes and their storage resources held constant, the network transmission success rate decreases as the number of data nodes increases, though only moderately, since the number of simulated nodes does not grow drastically; the success rate would presumably drop sharply if the data nodes increased dramatically. The storage allocation mode based on the weighted max-min fairness principle still achieves a noticeably higher transmission success rate than the first-come first-served mode, but as the number of data nodes continues to grow, the success rates of the two converge.

Figure 9. Relationship between the number of data nodes and the average route time delay

As shown in Figure 9, as the number of data nodes grows, the average delay of the DTN network increases significantly under both allocation modes. The algorithm improved by the weighted max-min fairness principle obtains a noticeably lower network delay, but as the data nodes proliferate, the network delays of the two gradually approach each other.

The main reasons for the above results are as follows: (1) When the number of data nodes increases, the volume of data to be sent surges; with the number of data forwarding nodes and their storage resources fixed, the network routing performance is bound to degrade. (2) For a given number of data nodes, the storage allocation mode based on the weighted max-min fairness principle can serve more nodes, thus reducing the average waiting time and the network delay. (3) The more data nodes there are while the number of data forwarding nodes stays constant, the worse the overall DTN performance becomes. Therefore, when the data nodes in the network proliferate, it is necessary to increase the number of data forwarding nodes or enlarge their storage space in order to maintain DTN performance.

4. Conclusion

To improve the performance of routing algorithms in the DTN, this paper applies the weighted max-min fairness principle to the allocation of storage resources at the data forwarding nodes. Compared with the first-come first-served allocation mode, this plan improves both the network transmission success rate and the average network delay, thereby significantly enhancing DTN performance. Optimizing storage allocation is thus an important means of improving network service quality.

Acknowledgment

This work was partially supported by the National High Technology Research and Development Program (2015AA015701) of the Ministry of Science and Technology of China, the Science and Technology Development Project of Jilin Province, China (20190302070GX), and the Science and Technology Project of Jilin Province, China (JJKH20190598KJ).

References

[1] Burleigh S, Hooke A, Torgerson L, Fall K, Cerf V, Durst B, Scott K, Weiss H. (2003). Delay-tolerant networking: An approach to interplanetary Internet. IEEE Communications Magazine 41(6): 128-136. https://doi.org/10.1109/MCOM.2003.1204759

[2] Cerf V, Burleigh S, Hooke A, Torgerson L, Durst R. (2007). RFC 4838: Delay-tolerant networking architecture. USA: IETF.

[3] Fall K. (2003). A delay-tolerant network architecture for challenged internets. Proceedings of ACM SIGCOMM 2003. https://doi.org/10.1145/863955.863960

[4] Ochiai H, Ishizuka H, Kawakami Y, Esaki H. (2011). A DTN-based sensor data gathering for agricultural applications. IEEE Sensors Journal 11(11): 2861-2868. https://doi.org/10.1109/JSEN.2011.2170562

[5] Caini C, Cornice P, Firrincieli R, Lacamera D. (2008). A DTN approach to satellite communications. IEEE Journal on Selected Areas in Communications 26(5): 820-827. https://doi.org/10.1109/JSAC.2008.080608

[6] Juang P, Oki H, Wang Y, Martonosi M, Peh LS, Rubenstein D. (2002). Energy-efficient computing for wildlife tracking: Design tradeoffs and early experiences with ZebraNet. ACM SIGOPS Operating Systems Review 36(5): 96-107. https://doi.org/10.1145/605397.605408

[7] Nace D, Pioro M. (2008). Max-min fairness and its applications to routing and load-balancing in communication networks: A tutorial. IEEE Communications Surveys & Tutorials 10(4): 4-17. https://doi.org/10.1109/SURV.2008.080403

[8] Zhao W, Ammar M, Zegura E. (2004). A message ferrying approach for data delivery in sparse mobile Ad Hoc networks. 5th ACM International Symposium on Mobile Ad Hoc Networking and Computing, pp. 187-198. https://doi.org/10.1145/989459.989483

[9] Vahdat A, Becker D. (2000). Epidemic routing for partially connected ad hoc networks. Technical Report CS-200006, Duke University.

[10] Lindgren A, Doria A, Schelén O. (2003). Probabilistic routing in intermittently connected networks. ACM SIGMOBILE Mobile Computing and Communications Review 7(3): 19-20. https://doi.org/10.1145/961268.961272

[11] Spyropoulos T, Psounis K, Raghavendra CS. (2005). Spray and wait: An efficient routing scheme for intermittently connected mobile networks. 2005 ACM SIGCOMM Workshop on Delay-tolerant Networking, pp. 252-259. https://doi.org/10.1145/1080139.1080143

[12] Burgess J, Gallagher B, Jensen D, Levine BN. (2006). MaxProp: Routing for vehicle-based disruption-tolerant networks. Proceedings of IEEE INFOCOM 2006.

[13] Keränen A, Ott J, Kärkkäinen T. (2009). The ONE simulator for DTN protocol evaluation. 2nd International Conference on Simulation Tools and Techniques (SIMUTools 2009), Article 55. https://doi.org/10.4108/ICST.SIMUTOOLS2009.5674

[14] Balasubramanian A, Levine B, Venkataramani A. (2007). DTN routing as a resource allocation problem. ACM SIGCOMM Computer Communication Review 37(4): 373-384. https://doi.org/10.1145/1282427.1282422