dc.description.abstract | Optimizing Interest forwarding and Data delivery has been among
the most studied problems in NDN for the last decade; however, only
a few contributions strive to minimize communication cost and delay
concurrently. In NDN, a receiver-driven forwarding strategy is considered
resource-consuming, as the routers incur computational overhead to find
the best path to the desired item, specified by an Interest’s name. On
the other hand, a source-driven forwarding strategy, a scheme that
suppresses the sub-optimal sources, experiences increased delay when
no source answers during the exploration phase. The confluence of the two
strategies can counteract the drawbacks of each; this combination, however,
has never been investigated. In this work, a reinforcement learning-based
strategy, namely Thompson Sampling, is proposed that operates in a
receiver-cum-source-driven fashion to optimize Interest forwarding
and answering. The proposed method introduces a 'Beam'
concept coupled with adaptive scoped-flooding to optimize Interest
forwarding, and the sources adopt Thompson Sampling to suppress
the sub-optimal responses. When hit by an Interest, an optimal source
sends back the desired Data to the consumer whereas a sub-optimal
source remains silent. Together, the 'Beam' and the scoped-flooding
adapt the Interest forwarding range based on the cache hit/miss ratio.
The adaptation optimizes communication cost and delay, and helps
make the proposed strategy resource-savvy. The proof-of-concept
implementation in software (simulation) reveals that the
proposed system outperforms the counterpart benchmarks by reducing
communication cost and delay in NDN (by around 350% and
10%, respectively) without compromising the packet delivery ratio. | en_US |
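To illustrate the source-side answer/stay-silent decision described in the abstract, the following is a minimal sketch (not taken from the paper; the class, method names, and the 0.5 threshold are hypothetical assumptions) of how a source might use Beta-Bernoulli Thompson Sampling to suppress sub-optimal responses:

    import random

    class ThompsonSource:
        """Hypothetical source-side agent: keeps a Beta(alpha, beta) belief
        over whether this source is the optimal responder for a name prefix."""

        def __init__(self):
            self.alpha = 1.0  # pseudo-count of outcomes where this source was optimal
            self.beta = 1.0   # pseudo-count of outcomes where a better source answered

        def should_answer(self):
            # Sample a success probability from the current Beta posterior and
            # answer the Interest only if the sample favours this source;
            # otherwise stay silent.
            theta = random.betavariate(self.alpha, self.beta)
            return theta >= 0.5

        def update(self, was_optimal):
            # Reinforce the posterior with the observed outcome of the last answer.
            if was_optimal:
                self.alpha += 1.0
            else:
                self.beta += 1.0

Under these assumptions, a source whose Data is rarely the best response increasingly samples low values of theta and remains silent, which mirrors the suppression behaviour the abstract describes.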