Below is a comprehensive list of publications, journal papers, conference papers, books and planned releases. You can sort by year, type of paper or
presentation under the "All types" menu below. PDFs are also provided where possible. You can read the abstracts by clicking on the title.
3GPP Release 13 introduced a narrowband system, namely Narrowband Internet of Things (NB-IoT), to provide low-power, wide-area cellular connectivity for the Internet of Things. NB-IoT uses a design similar to Long Term Evolution (LTE), but it makes essential modifications to reduce device complexity. NB-IoT is optimized for machine-type communications, and it aims to increase coverage, reduce overhead, and reduce power consumption while increasing capacity. In this paper, we present our testbed-based experimental study on the operation of NB-IoT systems in the presence of pulsed radar signals. We leverage results from our experiments to provide a comprehensive analysis of the impact of radar interference on the coverage and capacity of an NB-IoT base station when it shares an uplink channel with S-band pulsed radars. Our results indicate that NB-IoT cell coverage is affected in the presence of radar interference.
Radio propagation models play a crucial role in realizing effective spectrum sharing. Unlike propagation models that do not use the exact details of terrain, terrain-based propagation models are effective in identifying spatial spectrum sharing opportunities for the secondary users (SUs) around an incumbent user (IU). Unfortunately, terrain-based propagation models, such as the Irregular Terrain Model (ITM) in point-to-point (PTP) mode, are computationally expensive, and they require precise geo-locations of the SUs. Such requirements render them challenging, if not impractical, to implement in real-time applications, such as geolocation database (GDB)-driven spectrum sharing. To address this problem, we propose a pragmatic approach called Tool for Enabling Spatial Spectrum Sharing Opportunities (TESSO). TESSO characterizes the aggregate interference caused by the SUs and identifies spatial spectrum sharing opportunities effectively. It is computationally efficient, and does not require precise geo-locations of the SUs. Our results show that TESSO provides the same level of interference protection guarantee to the IU as that offered by the terrain-based models. TESSO can be implemented in GDB-driven spectrum sharing ecosystems for effectively exploiting spatial spectrum sharing opportunities.
Index Terms—Dynamic Spectrum Access, Spectrum Sharing, Radio Propagation Model, Aggregate Interference, Geolocation Databases, Exclusion Zones.
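The aggregate-interference calculation that TESSO builds on can be illustrated with a minimal sketch: sum the per-SU received powers at the IU under a generic log-distance path-loss law standing in for a terrain-based model. The frequency, path-loss exponent, transmit powers, and coordinates below are illustrative assumptions, not TESSO's actual parameters.

```python
import math

def path_loss_db(distance_km, f_mhz=3550.0, exponent=3.0):
    # Log-distance model: free-space loss at a 1 km reference,
    # then a distance-exponent rolloff (a stand-in for ITM).
    fspl_1km_db = 32.45 + 20 * math.log10(f_mhz)
    return fspl_1km_db + 10 * exponent * math.log10(distance_km)

def aggregate_interference_dbm(sus, iu_xy):
    # Powers add in linear units (mW), so convert, sum, convert back.
    total_mw = 0.0
    for x_km, y_km, tx_dbm in sus:
        d_km = max(math.hypot(x_km - iu_xy[0], y_km - iu_xy[1]), 0.01)
        rx_dbm = tx_dbm - path_loss_db(d_km)
        total_mw += 10 ** (rx_dbm / 10)
    return 10 * math.log10(total_mw)

# Three hypothetical SUs at (x, y) km offsets, each transmitting 30 dBm.
sus = [(5.0, 0.0, 30.0), (0.0, 8.0, 30.0), (-12.0, 3.0, 30.0)]
agg = aggregate_interference_dbm(sus, (0.0, 0.0))
```

Because powers add in linear units, the aggregate is dominated by the nearest SUs, which is why per-SU interference limits alone cannot guarantee protection of the IU.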
The long-term evolution (LTE) has spread around the globe for deploying 4G cellular networks for commercial use. These days, it is gaining interest for new applications where mobile broadband services can be of benefit to society. Whereas the basic concepts of LTE are well understood, its long-term evolution has just started. New areas of R&D look into operation in unlicensed and shared bands, where new versions of LTE need to coexist with other communication systems and radars. Virginia Tech has developed an LTE testbed with unique features to spur LTE research and education. This paper introduces Virginia Tech’s LTE testbed, its main features and components, access and configuration mechanisms, and some of the research thrusts that it enables. It is unique in several aspects, including the extensive use of software-defined radio technology, the combination of industry-grade hardware and software-based systems, and the remote access feature for user-defined configurations of experiments and radio frequency paths.
The 5.9 GHz band is being actively explored for possible spectrum sharing opportunities between Dedicated Short Range Communications (DSRC) and IEEE 802.11ac networks in order to address the increasing demand for bandwidth-intensive Wi-Fi applications. In this paper, we study the implications of this spectrum sharing for the performance of Wi-Fi systems. Through experiments performed on our testbed, we first investigate band sharing options available for Wi-Fi devices. Using experimental results, we show the need for using conservative Wi-Fi transmission parameters to enable harmonious coexistence between DSRC and Wi-Fi. Moreover, we show that under the current 802.11ac standard, certain channelization options, particularly the high-bandwidth ones, cannot be used by Wi-Fi devices without causing interference to the DSRC nodes. Under these constraints, we propose a Real-time Channelization Algorithm (RCA) for Wi-Fi Access Points (APs) operating in the shared spectrum. Evaluation of the proposed algorithm using a prototype implementation on commodity hardware as well as via simulations shows that informed channelization decisions can significantly increase Wi-Fi throughput compared to static channelization schemes.
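As a rough illustration of the channelization decision described above, the sketch below picks the widest Wi-Fi channel whose span avoids currently busy DSRC spectrum. The channel edges and busy band are made-up numbers, and this greedy rule is only a toy stand-in for RCA.

```python
def best_channel(candidates, dsrc_busy):
    """Pick the widest candidate Wi-Fi channel (lo, hi in MHz) that does
    not overlap any busy DSRC band. Frequencies are illustrative only."""
    def overlaps(ch):
        lo, hi = ch
        return any(lo < b_hi and b_lo < hi for b_lo, b_hi in dsrc_busy)
    usable = [ch for ch in candidates if not overlaps(ch)]
    # Prefer wider channels; return None if nothing fits.
    return max(usable, key=lambda ch: ch[1] - ch[0], default=None)

# 20/40 MHz candidates around 5.9 GHz; one DSRC channel currently busy.
candidates = [(5850, 5890), (5850, 5870), (5870, 5890), (5890, 5910)]
busy = [(5855, 5865)]
choice = best_channel(candidates, busy)
```

When DSRC activity blocks the 40 MHz option, the rule falls back to a narrower channel, mirroring the paper's observation that high-bandwidth channelizations may be unusable without interfering with DSRC.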
Although using geolocation databases for spectrum sharing has many pragmatic advantages, it also raises potentially serious operational security (OPSEC) issues. OPSEC is a particularly important consideration in light of recent calls in the U.S. for spectrum sharing between federal government (including military) systems and non-government systems (e.g., cellular service providers). In this paper, we explore the OPSEC, location privacy in particular, of incumbent radars in the 3.5 GHz band. First, we show that adversarial secondary users can easily infer the locations of incumbent radars by making seemingly innocuous queries to the database. Then, we propose several obfuscation techniques that can be implemented by the database for countering the inference attacks. We also investigate the inherent tradeoff between the degree of obfuscation and spectrum utilization efficiency. Finally, we validate our discussions by providing results from extensive simulations.
We are in the midst of a major paradigm shift in how we manage radio spectrum. This paradigm shift is necessitated by the growth of wireless services of all types and the demand pressure imposed on limited spectrum resources under legacy management regimes. The shift is feasible because of advances in radio and networking technologies that make it possible to share spectrum dynamically in all possible dimensions—i.e., across frequencies, time, location, users, uses, and networks. Realizing the full potential of this shift to Dynamic Spectrum Sharing will require the co-evolution of wireless technologies, markets, and regulatory policies; a process which is occurring on a global scale. This paper provides a current overview of major technological and regulatory reforms that are leading the way toward a global paradigm shift to more flexible, dynamic, market-based ways to manage and share radio spectrum resources. We focus on current efforts to implement database-driven approaches for managing the shared co-existence of users with heterogeneous access and interference protection rights, and discuss open research challenges.
The legacy concept of exclusion zones (EZs) is inept at enabling efficient utilization of fallow spectrum by secondary users (SUs), since legacy EZs are static and overly conservative. The notion of a static EZ implies that it has to protect incumbent users (IUs) from the union of likely interference scenarios, leading to a worst-case, conservative solution. In this paper, we propose the concept of dynamic, multi-tier EZs, which takes advantage of participatory spectrum sensing carried out by SUs to support efficient database-driven spectrum sharing while protecting IUs against SU-induced aggregate interference. Specifically, the database directly incentivizes SUs to participate in spectrum sensing, which augments the geolocation database by defining smaller EZs with dynamic boundaries and creating additional spectrum access opportunities for SUs. We propose an incentive mechanism based on a two-level game-theoretic model, in which the database conducts dynamic pricing in a first-level Stackelberg game in the presence of SUs who strategically contribute to spectrum sensing in a second-level stochastic game. The existence of an equilibrium solution is proven. According to our findings, the proposed incentive mechanism for dynamic, multi-tier EZs is effective in improving spectrum utilization efficiency while guaranteeing incumbent protection.
Many web applications provide secondary authentication methods, i.e., secret questions (or password recovery questions), to reset the account password when a user’s login fails. However, the answers to many such secret questions can be easily guessed by an acquaintance or exposed to a stranger who has access to public online tools (e.g., online social networks); moreover, a user may forget her/his answers long after creating the secret questions. Today’s prevalence of smartphones has granted us new opportunities to observe and understand how the personal data collected by smartphone sensors and apps can help create personalized secret questions without violating the users’ privacy concerns. In this paper, we present a Secret-Question-based Authentication system, called “Secret-QA”, that creates a set of secret questions on the basis of people’s smartphone usage. We develop a prototype on Android smartphones, and evaluate the security of the secret questions by asking the acquaintances/strangers who participated in our user study to guess the answers with and without the help of online tools; meanwhile, we observe the questions’ reliability by asking participants to answer their own questions. Our experimental results reveal that the secret questions related to motion sensors, calendar, app installation, and part of the legacy app usage history (e.g., phone calls) have the best memorability for users as well as the highest robustness to attacks.
Group signatures (GSs) are an elegant approach for providing privacy-preserving authentication. Unfortunately, modern GS schemes have limited practical value for use in large networks due to the high computational complexity of their revocation check procedures. We propose a novel GS scheme called Group Signatures with Probabilistic Revocation (GSPR), which significantly improves scalability with regard to revocation. GSPR employs the novel notion of probabilistic revocation, which enables the verifier to check the revocation status of the private key of a given signature very efficiently. However, GSPR's revocation check procedure produces probabilistic results, which may include false positive results but no false negative results. GSPR includes a procedure that can be used to iteratively decrease the probability of false positives. GSPR makes an advantageous trade-off between computational complexity and communication overhead, resulting in a GS scheme that offers a number of practical advantages over the prior art. We provide a proof of security for GSPR in the random oracle model using the decisional linear assumption and the bilinear strong Diffie-Hellman assumption.
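GSPR's probabilistic revocation check (false positives possible, no false negatives) behaves like a Bloom-filter membership test. The sketch below illustrates that property with ordinary hashing rather than GSPR's actual group-signature construction; `ProbabilisticRevocationList` and its parameters are hypothetical.

```python
import hashlib

class ProbabilisticRevocationList:
    """Bloom-filter-style revocation check: a revoked key always tests
    positive; an unrevoked key tests positive only with small probability."""

    def __init__(self, num_bits=1 << 16, num_hashes=3):
        self.m, self.k = num_bits, num_hashes
        self.bits = bytearray(self.m // 8)

    def _positions(self, token):
        # Derive k bit positions from independent hashes of the token.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{token}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def revoke(self, token):
        for p in self._positions(token):
            self.bits[p // 8] |= 1 << (p % 8)

    def is_revoked(self, token):
        # All k bits set => "revoked" (possibly a false positive).
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(token))

prl = ProbabilisticRevocationList()
prl.revoke("sig-key-1")
```

Enlarging the bit array or adding hash rounds lowers the false-positive rate, mirroring GSPR's procedure for iteratively decreasing the probability of false positives.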
In spectrum sharing, a spatial separation region is defined around primary users (PUs) to protect them from secondary user (SU)-induced interference. This protection region— referred to by a number of names, such as an exclusion zone (EZ) or a protection zone (PZ)—has a static boundary, and this boundary is determined very conservatively to provide an additional margin of protection for the PUs. This legacy notion of interference protection is overly rigid, and often results in poor spectrum utilization efficiency. In this paper, we propose a novel framework for prescribing interference protection for the PUs that addresses some of the limitations of legacy EZs. Specifically, we introduce the concept of Multi-tiered Incumbent Protection Zones (MIPZ), and show that it can be used to dynamically adjust the PU’s protection boundary based on the radio environment, network conditions, and the PU interference protection requirement. MIPZ can serve as an analytical framework for quantitatively analyzing a given PZ to gain insights on and determine the tradeoffs between interference protection and spectrum utilization efficiency. It allows a number of SUs, say N, to operate closer to the PU, and improves the overall spectrum utilization efficiency while ensuring a probabilistic guarantee of interference protection to the PU. We leverage the combined power of database-driven spectrum sharing and stochastic optimization theory for dynamically computing the zone boundary and the value of N. Using extensive simulation results, we demonstrate that the proposed framework improves spectrum utilization efficiency by adapting to the changing interference environment through dynamic adjustments of the zone boundary.
Abstract—Spectrum security and enforcement is one of the major challenges that need to be addressed before spectrum sharing technologies can be adopted widely. The problem of rogue transmitters is a major threat to the viability of spectrum sharing. One approach for deterring rogue transmissions is to enable receivers to authenticate or uniquely identify transmitters. Although cryptographic mechanisms at the higher layers have been widely used to authenticate transmitters, the ability to authenticate transmitters at the physical (PHY) layer has a number of key advantages over higher-layer approaches. In existing schemes, the authentication signal is added to the message signal in such a way that the authentication signal appears as noise to the message signal and vice versa. Hence, existing schemes are constrained by a fundamental tradeoff between the message signal’s signal-to-noise ratio (SNR) and the authentication signal’s SNR. In this paper, we extend the Precoded Duobinary Signaling (P-DS) technique to devise a new PHY-layer authentication scheme called P-DS for Authentication (P-DSA). P-DSA exploits the redundancy introduced by P-DS to embed the authentication signal into the message signal. P-DSA is not constrained by the aforementioned tradeoff between the message and authentication signals. Our results show that P-DSA improves the detection performance compared to the prior art without sacrificing message throughput or increasing transmission power.
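The redundancy that P-DSA exploits comes from duobinary signaling's controlled inter-symbol interference. The sketch below shows plain precoded duobinary encoding and symbol-by-symbol decoding; the authentication embedding itself is not shown, and this is standard textbook duobinary rather than the paper's scheme.

```python
def duobinary_encode(bits):
    """Precoded duobinary: XOR-precode each bit, then sum adjacent
    precoded symbols to form three-level line symbols (0, 1, or 2)."""
    d_prev = 0
    levels = []
    for b in bits:
        d = b ^ d_prev             # precoding prevents error propagation
        levels.append(d + d_prev)  # controlled ISI: correlate with previous symbol
        d_prev = d
    return levels

def duobinary_decode(levels):
    """With precoding, each data bit is simply the level modulo 2."""
    return [lv % 2 for lv in levels]

bits = [1, 0, 1, 1, 0, 0, 1]
levels = duobinary_encode(bits)
```

The three-level alphabet carries one bit per symbol, and it is this structured redundancy (two levels map to the same data bit) that P-DSA reuses to carry authentication information.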
Reducing the size of exclusion zones (EZs) in spectrum sharing is vital for efficient utilization of fallow spectrum as well as for the economic viability of spectrum sharing itself. In this paper, we explore two approaches for reducing the size of EZs. We show that multi-tiered EZs can be used to improve spectrum utilization efficiency by implementing the concept of differential spectrum access hierarchy. Also, we provide quantitative results that show the impact of using a point-to-point mode terrain profile in calculating an EZ’s contour. Such a terrain profile captures the effects of propagation losses due to area-specific topography, which are not considered by the F-curves, a common method of calculating an EZ’s boundary. Our results indicate that the use of such a terrain profile results in a noticeable decrease in the size of an EZ.
We propose Precoded SUbcarrier Nulling (PSUN), a transmission strategy for OFDM-based secondary communication networks (SCNs) that need to coexist with pulsed radar systems. It is a novel null-tone allocation method that effectively mitigates the inter-carrier interference (ICI) remaining after pulse blanking (PB). When the power of the radar’s pulsed interference is high, the SCN receiver needs to employ PB to mitigate the interference. Although PB is known to be an effective technique for suppressing pulsed interference, it magnifies the effect of ICI in OFDM waveforms and thus degrades bit error rate (BER) performance. For more reliable performance evaluation, we take into account two characteristics of the incumbent radar that significantly affect the performance of the SCN: (i) antenna sidelobes and (ii) out-of-band emissions. Our results show that PSUN effectively mitigates the impact of ICI remaining after PB.
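Pulse blanking itself is simple to illustrate: zero any received sample whose magnitude exceeds a threshold. The block below uses a toy complex-noise "OFDM" block and an arbitrary threshold; real receivers derive the threshold from the noise floor, and the ICI that blanking induces (PSUN's target) is not modeled here.

```python
import random

random.seed(0)

# Toy received block: unit-power complex noise standing in for an OFDM
# symbol, plus a strong radar pulse hitting ten consecutive samples.
n = 256
rx = [complex(random.gauss(0, 0.707), random.gauss(0, 0.707)) for _ in range(n)]
for i in range(100, 110):
    rx[i] += 20.0  # radar pulse

def pulse_blank(samples, threshold=4.0):
    """Zero out any sample whose magnitude exceeds the threshold."""
    return [0j if abs(s) > threshold else s for s in samples]

blanked = pulse_blank(rx)
```

Blanking removes the high-power pulse but punches a rectangular hole in the time-domain waveform, which spreads energy across subcarriers after the FFT; that residual ICI is what PSUN's null-tone allocation is designed to absorb.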
Cognitive radio (CR) technologies have led to several wireless standards (e.g., IEEE 802.11af and IEEE 802.22) that enable secondary networks to access the TV white-space (TVWS) spectrum. Different unlicensed wireless technologies with different PHY/MAC designs are expected to coexist in the same TVWS spectrum—we refer to such a situation as heterogeneous coexistence. The heterogeneity of the PHY/MAC designs of the coexisting CR networks can potentially exacerbate the hidden terminal problem. This problem cannot be addressed by the conventional handshaking/coordination mechanism between two homogeneous networks employing the same radio access technology (RAT). In this paper, we present a coexistence protocol, called Spectrum Sharing for Heterogeneous Coexistence (SHARE), that mitigates the hidden terminal problem for the coexistence between two types of networks: one that employs a TDM-based MAC protocol and one that employs a CSMA-based MAC protocol. Specifically, SHARE utilizes beacon transmissions and dynamic quiet periods to avoid packet collisions caused by the hidden terminals. Our analytical and simulation results show that SHARE reduces the number of packet collisions and guarantees weighted-fairness in partitioning the spectrum among the coexisting secondary networks.
Conference Paper: 2014 ACM Conference on Computer and Communications Security (CCS), Arizona, USA, 12 pp., November 2014
Conference Paper: 42nd Research Conference on Communication, Information, and Internet Policy (TPRC), Arlington, VA, 12 pp., Sept. 2014
Abstract: The role of expanding spectrum as a contributor to economic growth was highlighted in the National Broadband Plan and in the President’s Council of Advisors on Science and Technology report entitled “Realizing the Full Potential of Government-Held Spectrum to Spur Economic Growth.” Recommendations in the PCAST report include sharing underutilized Federal spectrum and identifying 1,000 MHz of Federal spectrum to create “the first shared-use spectrum superhighways.” To realize this vision, fundamentally new spectrum access technologies will be developed; therefore, it is important to understand security and privacy implications for these possible new designs. Security and privacy become especially critical concerns in light of the increasing prospects of spectrum sharing between federal government systems and non-government systems. The likelihood of such a spectrum-sharing scenario was heightened by the Federal Communications Commission’s notice of proposed rulemaking (NPRM) for the 3.5 GHz band. The NPRM outlines a geolocation database-driven spectrum sharing scenario where Incumbent Users—namely, federal government, including military, users and fixed satellite service licensees—share spectrum with Secondary Users operating small-cell technologies on an unlicensed basis. Although privacy issues are critical in such a spectrum-sharing scenario, there is little research on those problems. This paper identifies privacy issues and relevant laws related to geolocation database-driven spectrum sharing. It considers the different issues that will arise depending on the basic design choices for the spectrum sharing system; for example, either a government or private entity might maintain a spectrum-sharing database. It analyzes spectrum sharing from the viewpoint of geolocation and the evolving expectation of privacy in location information.
The paper identifies questions to be addressed in future spectrum sharing design, and suggests areas for increased legal attention.
Recent advances in spectrum access technologies, such as cognitive radios, have made spectrum sharing a viable option for addressing the spectrum shortage problem. However, these advances have also contributed to the increased possibility of "hacked" or "rogue" radios causing harm to the spectrum sharing ecosystem by causing significant interference to other wireless devices. One approach for countering such threats is to employ a scheme that can be used by a regulatory entity (e.g., FCC) to uniquely identify a transmitter by authenticating its waveform. This enables the regulatory entity to collect solid evidence of rogue transmissions that can be used later during an adjudication process. We coin the term Blind Transmitter Authentication (BTA) to refer to this approach. Unlike in the existing techniques for PHY-layer authentication, in BTA, the entity that is authenticating the waveform is not the intended receiver. Hence, it has to extract and decode the authentication signal "blindly" with little or no knowledge of the transmission parameters. In this paper, we propose a novel BTA scheme called Frequency offset Embedding for Authenticating Transmitters (FEAT). FEAT embeds the authentication information into the transmitted waveform by inserting an intentional frequency offset. Our results indicate that FEAT is a practically viable approach and is very robust to harsh channel conditions. Our evaluation of FEAT is based on theoretical bounds, simulations, and indoor experiments using an actual implementation.
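The core FEAT idea, conveying authentication bits through an intentional frequency offset that a third party can estimate blindly, can be sketched as follows. The bit-to-offset map, sample rate, and tone-only waveform are illustrative assumptions, not FEAT's actual design; the lag-1 autocorrelation estimator is a standard blind carrier-frequency-offset technique.

```python
import cmath
import math

fs = 1.0e6                            # sample rate in Hz (assumed)
offsets_hz = {0: 200.0, 1: 400.0}     # hypothetical bit-to-offset map

def embed(bit, n=4096):
    """Transmitter: impose an intentional frequency offset on a tone."""
    return [cmath.exp(2j * math.pi * offsets_hz[bit] * k / fs) for k in range(n)]

def blind_estimate(x):
    """Blind CFO estimate from the phase of the lag-1 autocorrelation;
    needs no knowledge of the transmitted data."""
    r = sum(a * b.conjugate() for a, b in zip(x[1:], x[:-1]))
    return cmath.phase(r) * fs / (2 * math.pi)

rx = embed(1)
est = blind_estimate(rx)
bit = min(offsets_hz, key=lambda b: abs(offsets_hz[b] - est))
```

Because the estimator works from sample-to-sample phase rotation alone, an enforcement entity can recover the embedded bit without decoding the message, which is the "blind" property BTA requires.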
Cognitive radio is one of the innovative technologies that has the potential to effectively address the spectrum shortage problem and radically change the way we utilize spectrum. Because of its potential impact, various stakeholders—including regulatory policy makers, wireless device manufacturers, telecommunication operators, and academic researchers—have shown strong interest in it, especially with respect to research and development. Although numerous journal and conference publications, tutorials, and books on cognitive radio have been published in the last few years, the vast majority of them focus on the various physical-layer attributes of the technology. More importantly, these technical publications discuss cognitive radio in isolation, essentially as a standalone system or network, with little regard for how it may interact with legacy wireless systems or how heterogeneous cognitive radio systems may collaborate with each other. Although this book’s main theme is cognitive radio, its specific focus areas are quite different from the existing literature. The primary aim of this book is to provide a comprehensive discussion on how cognitive radio technologies can be employed to enable efficient and harmonious coexistence of homogeneous as well as heterogeneous wireless systems and networks. Because the discussions in the book focus on the problem of coexistence of wireless systems, most of the book’s contents relate to the medium access control layer, rather than the physical layer. In other words, the discussions in this book revolve around how cognitive radio technologies can be used to enable various wireless networks to coexist and efficiently share spectrum. The intended readership of this book includes wireless communications industry researchers and practitioners as well as researchers in academia.
The readership is assumed to have background knowledge in wireless communications and networking, although they may have no in-depth knowledge of cognitive radio technologies. The intention of this book is to introduce communication generalists to the technical challenges of the various coexistence techniques and mechanisms as well as solution approaches which are enabled by cognitive radios. This book is available at Amazon.com.
Conference Paper: 2014 ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc), pp. 1656-174, Aug. 2014.
In cognitive radio (CR) networks, a pair of CR nodes have to “rendezvous” on a common channel for link establishment. Channel hopping (CH) protocols have been proposed for creating rendezvous over multiple channels to reduce the possibility of rendezvous failures caused by the detection of primary user signals. Rendezvous within a minimal bounded time over multiple channels is a challenging problem in heterogeneous CR networks where two CR nodes may have asynchronous clocks, different sensing capabilities, no common universal channel set, and heterogeneous channel index systems. In this paper, we present a systematic approach using group theory for designing CH protocols that guarantee the maximum number of rendezvous channels and the minimal time-to-rendezvous (TTR) in heterogeneous environments. We derive the minimum upper bound of TTR, and propose two types of rendezvous protocols that are independent of environmental heterogeneity. Analytical and simulation results show that these protocols are resistant to rendezvous failures under various network conditions.
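For intuition on time-to-rendezvous (TTR), the toy simulation below measures how long two independently random-hopping nodes with different available-channel sets take to land on a common channel. Random hopping is only a baseline for comparison, not the group-theory-based construction proposed in the paper, and the channel sets are illustrative.

```python
import random

def time_to_rendezvous(avail_a, avail_b, seed=0, max_slots=100_000):
    """Count slots until two independently random-hopping nodes pick the
    same channel. Returns None if they never meet within max_slots."""
    rng_a = random.Random(seed)       # node A's clock/randomness
    rng_b = random.Random(seed + 1)   # node B's, deliberately different
    for slot in range(1, max_slots + 1):
        if rng_a.choice(avail_a) == rng_b.choice(avail_b):
            return slot
    return None

# Heterogeneous available-channel sets with partial overlap (channels 3-5).
ttr = time_to_rendezvous([1, 2, 3, 4, 5], [3, 4, 5, 6], seed=42)
```

Random hopping only guarantees rendezvous in expectation (here the per-slot meeting probability is 3/20, so the expected TTR is about 6.7 slots), whereas the protocols in the paper bound the worst-case TTR deterministically.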
Conference Paper: 2014 ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc), pp. 215-224, Aug. 2014
Abstract: In database-driven opportunistic spectrum access, location information of secondary users plays an important role. In a database query-and-update procedure, a secondary user reports its location information to the geolocation database, so that the updated knowledge base facilitates location-aided incumbent protection and network coexistence. However, such database-driven spectrum sharing becomes very challenging when the secondary users are mobile. In this paper, we propose a probabilistic coexistence framework that supports mobile users by incorporating solutions to two core problems: (i) white space allocation (WSA) at the database and (ii) location update control (LUC) at the users. We frame the two problems such that they interact through dynamic control of the users' location uncertainty levels. For WSA, we derive a centralized real-time solution to mitigate mutual interference among secondary users and protect primary users against harmful interference. For LUC, we design a local two-level strategy to enable both movement-driven and interference-driven control of location uncertainty. This strategy makes an appropriate trade-off between the effectiveness of interference mitigation and the cost of database queries. To evaluate our algorithms, we have carried out both theoretical model-driven and real-world trace-driven simulation experiments. Our simulation results show that the proposed framework can determine and adapt the database query intervals of mobile users to achieve near-optimal interference mitigation with minimal location updates.
The depletion of usable radio frequency spectrum has stimulated increasing interest in dynamic spectrum access technologies, such as cognitive radio (CR). In a scenario where multiple co-located CR networks operate in the same swath of white-space (or unlicensed) spectrum with little or no direct coordination, co-channel self-coexistence is a challenging problem. In this paper, we focus on the problem of spectrum sharing among coexisting CR networks that employ orthogonal frequency division multiple access (OFDMA) in their uplink and do not rely on inter-network coordination. An uplink soft frequency reuse (USFR) technique is proposed to enable globally power-efficient and locally fair spectrum sharing. We frame the self-coexistence problem as a non-cooperative game. In each network cell, the uplink resource allocation (URA) problem is decoupled into two subproblems: subchannel allocation (SCA) and transmit power control (TPC). We provide a unique optimal solution to the TPC subproblem and present a low-complexity heuristic for the SCA subproblem. After integrating the SCA and TPC games as the URA game, we design a heuristic algorithm that achieves the Nash equilibrium in a distributed manner. In both multi-operator and single-operator coexistence scenarios, our simulation results show that USFR significantly improves self-coexistence in spectrum utilization, power consumption, and intra-cell fairness. The preliminary results of this paper were presented in part at IEEE INFOCOM 2012. This work was supported in part by the National Science Foundation under grants CNS-0746925, CNS-0831865, and CNS-0910531, and the Institute for Critical Technology and Applied Science at Virginia Tech.
Conference Paper: 2014 IEEE Int’l Conference on Computer Communications (INFOCOM), pp. 2715-2723, April-May 2014.
Abstract: The coexistence of cognitive radio (CR) networks in the same swath of spectrum has become an increasingly important problem, which is especially challenging when coexisting networks are heterogeneous (i.e., use different air interface standards), such as the case in TV white spaces. In this paper, we propose a credit-token-based spectrum etiquette framework that enables spectrum sharing among distributed heterogeneous CR networks with equal priority. Specifically, we propose a game-auction coexistence framework. Each network acts as either an offerer or a requester, and coexists with other networks via a non-cooperative game and a truthful multi-winner auction. The framework addresses the trade-offs among social welfare, the offerer’s revenue in the auction, and the requester’s utility in the game. We prove that the framework guarantees system stability. Our simulation results show that the proposed coexistence framework always converges to a near-optimal distributed solution and improves coexistence fairness and spectrum utilization.
Conference Paper: IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), pp. 236-247, April 2014. This paper received the Best Paper Award.
Abstract: Although using geolocation databases is a practical approach for enabling spectrum sharing, it poses a potentially serious privacy problem. Secondary users (queriers), through seemingly innocuous queries to the database, can determine the types and locations of incumbent systems operating in a given region of interest, and thus compromise the incumbents’ operational privacy. When the incumbent systems (primary users) are commercial systems, this is typically not a critical issue. However, if the incumbents are federal government systems, including military systems, then the information revealed by the databases can lead to a serious breach of operational privacy. In this paper, we propose privacy-preserving mechanisms and techniques for an obfuscated geolocation database that can enable the coexistence of primary and secondary users while preserving the operational privacy of the primary users.
Abstract: When different stakeholders share a common resource, as is the case in spectrum sharing, security and enforcement become critical considerations that affect the welfare of all stakeholders. Recent advances in radio spectrum access technologies, such as cognitive radios, have made spectrum sharing a viable option for significantly improving spectrum utilization efficiency. However, those technologies have also contributed to exacerbating the difficult problems of security and enforcement. In this paper, we review some of the critical security and privacy threats that impact spectrum sharing. We propose a taxonomy for classifying the various threats, and describe representative examples for each threat category. We also discuss threat countermeasures and enforcement techniques in the context of two different approaches: ex ante (preventive) and ex post (punitive) enforcement.
Conference Paper: 2014 International Conference on Computing, Networking, and Communications (ICNC), pp. 782-786, Feb. 2014.
In a cognitive radio network, the non-conforming behavior of rogue transmitters is a major threat to opportunistic spectrum access. One approach for facilitating spectrum enforcement and security is to require every transmitter to embed a uniquely-identifiable authentication signal in its waveform at the PHY-layer. In existing PHY-layer authentication schemes, known as blind signal superposition, the authentication/identification signal is added to the message signal as noise, which leads to a tradeoff between the message signal’s signal-to-noise ratio (SNR) and the authentication signal’s SNR under the assumption of constant average transmitted power. This implies that one cannot improve the former without sacrificing the latter, and vice versa. In this paper, we propose a novel PHY-layer authentication scheme called hierarchically modulated duobinary signaling for authentication (HM-DSA). HM-DSA introduces a controlled amount of inter-symbol interference (ISI) into the message signal. The redundancy induced by the addition of the controlled ISI is utilized to embed the authentication signal. Our scheme, HM-DSA, relaxes the constraint on the aforementioned tradeoff and improves the error performance of the message signal as compared to the prior art.
Conference Paper 2014 ACM Int’l Conference on Ubiquitous Information Management and Communication (IMCOM), Jan. 2014.
Policy-based cognitive radios (CRs) contain policy conformance components that are responsible for assuring the conformance of the radio’s transmission behavior to the currently active set of policies. The policy enforcer (PE) plays the central role in enforcing regulatory policies in a CR. Several proposed architectures for CRs and software defined radios (SDRs) deploy the radio components as part of a distributed system using middleware such as CORBA. In this paper, we perform an in-depth analysis of the requirements of a PE in a distributed system implementation. To this end, we describe a cache-based PE as part of a distributed CR system using a policy reasoner (PR) and CORBA middleware. We present a novel approach to maintaining cache coherency using meta-policies from the PR. We also study the trade-off relationship between performance and security in distributed policy-based CR systems. We focus on vulnerabilities in the transport mechanism and the problem of implicit authorization. We discuss methods for securing policy-based CR systems using secure inter-object communications along with policy conformance components.
This paper focuses on the problem of spectrum sharing between secondary networks that access spectrum opportunistically in TV spectrum. Compared to the coexistence problem in the ISM (Industrial, Scientific and Medical) bands, the coexistence situation in TV whitespace (TVWS) is potentially more complex and challenging due to the signal propagation characteristics in TVWS and the disparity of PHY/MAC strategies employed by the systems coexisting in it. In this paper, we propose a novel decision making algorithm for a system of coexistence mechanisms, such as an IEEE 802.19.1-compliant system, that enables coexistence of dissimilar TVWS networks and devices. Our algorithm outperforms existing coexistence decision making algorithms in terms of fairness and the percentage of demand serviced.
A number of wireless standards (e.g., IEEE 802.11af and IEEE 802.22) have been developed or are currently being developed for enabling opportunistic access in white space. When heterogeneous wireless networks that are based on different wireless standards operate in the same spectrum, coexistence issues can potentially cause major problems. Enabling collaborative coexistence via direct coordination between heterogeneous wireless networks is very challenging due to incompatible MAC/PHY designs of coexisting networks. Moreover, the direct coordination would require competing networks or service providers to exchange sensitive control information that may raise conflict of interest issues and customer privacy concerns. In this paper, we present an architecture for enabling collaborative coexistence of heterogeneous wireless networks over white space, called Symbiotic Heterogeneous coexistence ARchitecturE (SHARE). By mimicking the symbiotic relationships (i.e., the interspecific competition process) between heterogeneous organisms in a stable ecosystem, SHARE establishes an indirect coordination mechanism for spectrum sharing between heterogeneous wireless networks via a mediator system, which avoids the drawbacks of direct coordination. Analytical and simulation results show that SHARE allocates spectrum among coexisting networks in a weighted-fair manner without any inter-network direct coordination.
Spectrum security and enforcement is one of the major challenges that need to be addressed before spectrum-agile and opportunistic spectrum access technologies can be deployed. Rogue transmitters are a major threat to opportunistic spectrum access. One approach for deterring rogue transmissions is to enable receivers to authenticate or uniquely identify secondary transmitters. Although cryptographic mechanisms at the higher layers have been widely used to authenticate transmitters, the ability to authenticate transmitters at the physical (PHY) layer has a number of key advantages over higher-layer approaches. In existing schemes, the authentication signal is added to the message signal in such a way that the authentication signal appears as noise to the message signal and vice versa. Hence, existing schemes are constrained by a fundamental tradeoff between the message signal’s signal-to-noise ratio (SNR) and the authentication signal’s SNR. In this paper, we propose a novel PHY-layer authentication scheme called Precoded Duobinary Signaling for Authentication (P-DSA). P-DSA introduces some controlled amount of inter-symbol interference (ISI) into the data stream. The addition of the controlled ISI introduces redundancy in the message signal which can be utilized to embed the authentication signal. In this way, P-DSA relaxes the constraint on the aforementioned tradeoff. Our results show that P-DSA achieves superior detection performance compared to the prior art without sacrificing message throughput or increasing power.
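The controlled-ISI idea that P-DSA builds on can be sketched in a few lines. The code below is a textbook precoded duobinary encoder/decoder, not the P-DSA authentication embedding itself (the step that embeds the authentication signal in the induced redundancy is omitted); function names are illustrative.

```python
# Textbook precoded duobinary signaling: a minimal sketch, not the
# P-DSA scheme itself. The precoder prevents error propagation at
# the receiver.

def precode(bits):
    # b[n] = a[n] XOR b[n-1], with b[-1] = 0
    out, prev = [], 0
    for a in bits:
        prev = a ^ prev
        out.append(prev)
    return out

def duobinary_encode(precoded):
    # y[n] = b[n] + b[n-1], levels {0, 1, 2}: the controlled ISI
    return [b + p for b, p in zip(precoded, [0] + precoded[:-1])]

def duobinary_decode(levels):
    # With precoding, each symbol decodes independently: a[n] = y[n] mod 2
    return [y % 2 for y in levels]

msg = [1, 0, 1, 1, 0, 0, 1]
assert duobinary_decode(duobinary_encode(precode(msg))) == msg
```

The three-level duobinary waveform carries redundancy (its memory of the previous symbol) that a scheme like P-DSA can exploit as a side channel, without adding the authentication signal as noise on top of the message.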
Cognitive radios have applied various forms of artificial intelligence (AI) to wireless systems in order to solve the complex problems presented by proper link management, network traffic balance, and system efficiency. Case-based reasoning (CBR) has received attention as a prospective avenue for storing and organizing past information in order to allow the cognitive engine to learn from previous experience. CBR uses past information and observed outcomes to form empirical relationships that may be difficult to model using theory. As wireless systems become more complex and more tightly time constrained, scalability becomes an apparent concern when storing large amounts of information over multiple dimensions. This paper presents a quickly accessible data structure designed to reduce access time by several orders of magnitude compared to traditional similarity calculation methods. A framework is presented for case representation, which provides the core of useful information contained within a case. By grouping possible similarity dimension values into distinct partitions called buckets, the structure confines retrieval to a small set of candidate cases instead of requiring similarity computations over the entire case base.
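The bucket idea can be sketched as a dictionary index. This is an assumed structure for illustration only, not the paper's exact case representation: each similarity dimension is partitioned into fixed-width buckets, and cases are keyed by their tuple of bucket indices so that lookup avoids scanning every stored case.

```python
# Sketch of bucket-indexed case retrieval (assumed design, not the
# paper's): cases are grouped by the tuple of per-dimension bucket
# indices, so retrieval is a dict lookup rather than a full scan.
from collections import defaultdict

BUCKET_WIDTHS = (10.0, 5.0)   # hypothetical widths per dimension (e.g. SNR, throughput)

def bucket_key(features):
    return tuple(int(f // w) for f, w in zip(features, BUCKET_WIDTHS))

index = defaultdict(list)

def store_case(features, outcome):
    index[bucket_key(features)].append((features, outcome))

def retrieve(features):
    # Only cases falling in the same bucket are candidates.
    return index[bucket_key(features)]

store_case((23.0, 7.1), "config-A")
store_case((24.5, 8.0), "config-B")
store_case((61.0, 2.0), "config-C")
print(retrieve((25.0, 9.9)))   # the two nearby cases, not config-C
```

A real system would still rank the retrieved candidates by a similarity measure; bucketing only prunes the search space.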
Children's privacy in the online environment has become critical. Use of the Internet is increasing for commercial purposes, in requests for information, and in the number of children who use the Internet for casual web surfing, chatting, games, schoolwork, e-mail, interactive learning, and other applications. Often, websites hosting these activities ask for personal information such as name, e-mail, street address, and phone number. In the United States, the children's online privacy protection act (COPPA) of 1998 was enacted in reaction to widespread collection of information from children and subsequent abuses identified by the Federal Trade Commission (FTC). COPPA is aimed at protecting a child's privacy by requiring parental consent before collecting information from children under the age of 13. To date, however, the business practices used and the technical approaches employed to comply with COPPA fail to protect children's online privacy effectively. In this paper, we describe the design of an automated tool for protecting children's online privacy, called POCKET (Parental Online Consent for Kid's Electronic Transactions). The POCKET framework is a novel, technically feasible and legally sound solution to automatically enforce COPPA.
In decentralized cognitive radio (CR) networks, establishing a link between a pair of communicating nodes requires that the radios “rendezvous” in a common channel—such a channel is called a rendezvous channel—to exchange control information. When unlicensed (secondary) users opportunistically share spectrum with licensed (primary or incumbent) users, a given rendezvous channel may become unavailable due to the appearance of licensed user signals. Ideally, every node pair should be able to rendezvous in every available channel (i.e., maximize the rendezvous diversity) so that the possibility of rendezvous failures is minimized. Channel hopping (CH) protocols have been proposed previously for establishing pairwise rendezvous. Some of them enable pairwise rendezvous over all channels but require global clock synchronization, which may be very difficult to achieve in decentralized networks. Maximizing the pairwise rendezvous diversity in decentralized CR networks is a very challenging problem. In this paper, we present a systematic approach for designing CH protocols that maximize the rendezvous diversity of any node pair in decentralized CR networks. The resulting protocols are resistant to rendezvous failures caused by the appearance of primary user (PU) signals and do not require clock synchronization. The proposed approach, called asynchronous channel hopping (ACH), has two noteworthy features: 1) any pair of CH nodes are able to rendezvous on every channel so that the rendezvous process is robust to disruptions caused by the appearance of PU signals; and 2) an upper bounded time-to-rendezvous (TTR) is guaranteed between the two nodes even if their clocks are asynchronous. We propose two optimal ACH designs that maximize the rendezvous diversity between any pair of nodes and show their rendezvous performance via analytical and simulation results.
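The rendezvous-diversity property that ACH designs target can be checked by brute force: two unsynchronized nodes following the same periodic hopping sequence should overlap on every channel regardless of clock offset. The sketch below is a generic checker and does not reproduce the paper's sequence constructions; the period-6 example sequence is hand-picked for illustration.

```python
# Brute-force check of full rendezvous diversity under every cyclic
# clock offset (illustrative; not the paper's ACH construction).

def rendezvous_channels(seq, offset):
    n = len(seq)
    return {seq[t] for t in range(n) if seq[t] == seq[(t + offset) % n]}

def full_diversity(seq, num_channels):
    chans = set(range(num_channels))
    return all(rendezvous_channels(seq, d) == chans for d in range(len(seq)))

# Hand-picked period-6 sequence over 2 channels: rendezvous occurs on
# both channels for every offset, so TTR is bounded by one period.
print(full_diversity([0, 0, 1, 1, 0, 1], 2))  # True
# Naive round-robin fails: at offset 1 the two nodes never overlap.
print(full_diversity([0, 1, 0, 1, 0, 1], 2))  # False
```

The contrast shows why sequence design is nontrivial: a sequence can guarantee rendezvous when clocks agree yet fail completely under a small offset.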
Cognitive radio (CR) technology enables multiple wireless networks operating in overlapping regions to opportunistically access fallow spectrum from a common pool of spectrum. This spectrum access paradigm — referred to herein as simply spectrum sharing — holds the promise of significantly greater efficiency in spectrum utilization and alleviating the spectrum shortage problem. CRs have garnered great attention from the research community, and many security and privacy problems relevant to CR networks are being studied actively at this time. However, selfish misbehaviors that can occur in the spectrum contention process have received little attention. In this article we discuss two types of selfish misbehaviors in the context of spectrum contention: selfish spectrum contention and selfish channel negotiation. These misbehaviors deteriorate the fairness and performance of spectrum sharing mechanisms in both infrastructure-based and multi-hop CR networks. We also discuss countermeasures against these threats as well as the technical challenges that must be overcome to implement such countermeasures.
Due to delay and energy constraints, a cognitive radio may not be able to perform spectrum sensing in all available channels. Therefore, a sensing policy is needed to decide which channels to sense. The channel selection problem is the problem of designing such a sensing policy to maximize throughput while avoiding interference to primary users. The channel selection problem can be formulated as a reinforcement learning problem. Channel selection schemes that employ reinforcement machine learning algorithms are vulnerable to belief manipulation attacks that contaminate the knowledge base of the learning algorithms. In this paper, we analyze the security of channel selection algorithms that are based on reinforcement learning and propose mitigation techniques that make these algorithms more robust against belief manipulation attacks.
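As a rough illustration of framing channel selection as reinforcement learning, the sketch below uses a simple epsilon-greedy bandit: each channel's estimated value is the running mean of observed sensing rewards. The paper's algorithms, attack model, and mitigations are not reproduced here, and the channel availability probabilities are assumed. A belief manipulation attack would correspond to contaminating the `values` estimates with falsified rewards.

```python
# Epsilon-greedy channel selection sketch (one simple RL instance;
# illustrative assumptions throughout). Reward: 1 = channel found idle.
import random

class EpsilonGreedySelector:
    def __init__(self, num_channels, eps=0.1):
        self.eps = eps
        self.counts = [0] * num_channels
        self.values = [0.0] * num_channels

    def select(self):
        if random.random() < self.eps:
            return random.randrange(len(self.values))   # explore
        return max(range(len(self.values)), key=self.values.__getitem__)  # exploit

    def update(self, ch, reward):
        self.counts[ch] += 1
        # incremental running-mean update
        self.values[ch] += (reward - self.values[ch]) / self.counts[ch]

random.seed(0)
idle_prob = [0.2, 0.8, 0.5]   # assumed ground-truth idle probabilities
sel = EpsilonGreedySelector(3)
for _ in range(2000):
    ch = sel.select()
    sel.update(ch, 1 if random.random() < idle_prob[ch] else 0)
print([round(v, 2) for v in sel.values])
```

After enough rounds the learner concentrates on the channel with the highest estimated idle probability, which is exactly the belief an attacker would try to distort.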
To maximize their efficacy, cognitive radios (CRs) need to be able to cope with the constantly changing spectrum environment, evolving spectrum access policies, and a diverse array of network application requirements. Policy-based cognitive radios address these challenges by decoupling the spectrum access policies from device-specific implementations and optimizations. These radios can invoke situation-appropriate, adaptive actions based on policy specifications and the current spectrum environment. A policy-based CR has a reasoning engine called a policy reasoner. The primary task carried out by the policy reasoner is evaluating the transmission requests with respect to the spectrum policies. In this paper, we describe the design of a policy reasoner that processes ontology-based spectrum policies. The main advantage of using ontology-based policies is that the policy reasoner can understand and process any spectrum policies authored by any organization by relying on the spectrum ontologies. In our implementation, the spectrum ontology defines the various dynamic spectrum access (DSA) concepts, models the domain of DSA networks in a machine-understandable manner, and uses SWRL (Semantic Web Rule Language) rules to represent spectrum policies. Unfortunately, ontological reasoning needed to process ontology-based spectrum policies incurs greater computation overhead compared to non-ontological reasoning. This drawback can be a critical one as it can impede a CR from meeting its real-time performance requirements. We have carried out a number of experiments, using our implementation, to evaluate whether a radio controlled by ontology-based policies can meet its real-time performance requirements. Based on our experimental results, we propose a set of guidelines for the design of ontology-based spectrum access policies.
Distributed Spectrum Sensing (DSS) enables a Cognitive Radio (CR) network to reliably detect licensed users and avoid causing interference to licensed communications. The data fusion technique is a key component of DSS. We discuss the Byzantine Failure problem in the context of data fusion, which may be caused by either malfunctioning sensing terminals or Spectrum Sensing Data Falsification (SSDF) attacks. In either case, incorrect spectrum sensing data is reported to a data collector which can lead to the distortion of data fusion outputs. We investigate various data fusion techniques, focusing on their robustness against Byzantine Failures. In contrast to existing data fusion techniques that use a fixed number of samples, we propose a new technique that uses a variable number of samples. The proposed technique, which we call Weighted Sequential Probability Ratio Test (WSPRT), introduces a reputation-based mechanism to the Sequential Probability Ratio Test (SPRT). We evaluate WSPRT by comparing it with a variety of data fusion techniques under various conditions. We also discuss practical issues that need to be considered when applying the fusion techniques to CR networks. Our simulation results indicate that WSPRT is the most robust against Byzantine Failures among the data fusion techniques that were considered.
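A simplified version of the reputation-weighting idea can be sketched as follows. The exact WSPRT weight update and thresholds from the paper are not reproduced; the weights and probabilities below are assumed for illustration. Each binary report contributes a log-likelihood ratio scaled by the reporting node's reputation, and the fusion center stops as soon as a threshold is crossed.

```python
# Reputation-weighted sequential test sketch (simplified; not the
# paper's exact WSPRT weight function).
import math

P_D, P_F = 0.9, 0.1                     # assumed detection / false-alarm probs
A, B = math.log(99), math.log(1 / 99)   # stop thresholds

def wsprt(reports, weights):
    """reports: iterable of (node_id, decision); returns 'H1', 'H0' or None."""
    s = 0.0
    for node, d in reports:
        # log-likelihood ratio of one binary sensing report
        llr = math.log(P_D / P_F) if d == 1 else math.log((1 - P_D) / (1 - P_F))
        s += weights[node] * llr        # low-reputation reports count less
        if s >= A:
            return "H1"   # decide: primary user present
        if s <= B:
            return "H0"   # decide: channel idle
    return None           # undecided: collect more samples

weights = {"good1": 1.0, "good2": 1.0, "byzantine": 0.1}
reports = [("byzantine", 0), ("good1", 1), ("good2", 1),
           ("good1", 1), ("good2", 1)]
print(wsprt(reports, weights))  # "H1": the falsified report is outweighed
```

The variable sample count is the key difference from fixed-sample fusion rules: honest, consistent reports terminate the test quickly, while a down-weighted Byzantine report barely delays it.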
Cognitive radio (CR) is one of the key enabling technologies for opportunistic spectrum sharing. In such a spectrum sharing paradigm, radios access spectrum opportunistically by identifying the under-utilized spectrum and then transmitting waveforms in that spectrum that are compliant to relevant spectrum access policies. Implementing such a flexible scheme requires changes in the current static spectrum management approach. As a result, declarative spectrum management through policy-based dynamic spectrum access has garnered significant attention recently. Policy-based spectrum access decouples spectrum access policies and Policy Processing Components from the radio platform. The Policies define conditions under which the radios are allowed to transmit in terms of frequencies used, geographic locations, time etc. The Policy Processing Components include a reasoning engine called the Policy Reasoner, which is responsible for enforcement, analysis and processing of the policies, as well as resolving policy conflicts. This paper describes the design and implementation of a novel policy reasoner that processes spectrum access policies efficiently by reframing the policy reasoning problem as a graph-based Boolean function manipulation problem. The proposed policy reasoner has the capability to respond to either under-specified or invalid transmission requests (sent by the system strategy reasoner) by returning multiple sets of opportunity constraints that prescribe different ways of modifying transmission parameters in order to make them policy compliant.
With the development of dynamic spectrum access technologies, such as cognitive radio, the secondary use of underutilized TV broadcast spectrum has come a step closer to reality. Recently, a number of wireless standards that incorporate CR technology have been finalized or are being developed to standardize systems that will coexist in the same TV white spaces. In these wireless standards, the widely studied problem of primary-secondary network coexistence has been addressed by the use of incumbent geolocation databases augmented with spectrum sensing techniques. However, the challenging problem of secondary-secondary coexistence—in particular, heterogeneous secondary coexistence—has garnered much less attention in the standards and related literature. The coexistence of heterogeneous secondary networks poses challenging problems due to a number of factors, including the disparity of PHY/MAC strategies of the coexisting systems. In this article, we discuss the mechanisms that have been proposed for heterogeneous coexistence, and propose a taxonomy of those mechanisms targeting TVWSs. Through this taxonomy, our aim is to offer a clear picture of the heterogeneous coexistence issues and related technical challenges, and shed light on the possible solution space.
This paper provides an overview of how our access to the electromagnetic spectrum has evolved and will continue to expand over time. We first focus on the historical origins of technological and regulatory choices, and provide some insight into how these choices have impacted the efficiency with which we currently utilize the spectrum, and how we can better use it in the future. In turn, we summarize the relevant technologies being discussed in today’s standardization and research and development efforts. Finally, we provide a vision for the evolution of spectrum access technologies that, intertwined with progressive regulatory and economic policies, will enable flexible and secure sharing of spectrum to deliver seamless mobility with ubiquitous service for users worldwide.
Recent advances in cognitive radio (CR) technology have brought about a number of wireless standards that support opportunistic access to available white-space spectrum. Addressing the self-coexistence of CR networks in such an environment is very challenging, especially when coexisting networks operate in the same swath of spectrum with little or no direct coordination. In this paper, we study the problem of co-channel self-coexistence of uncoordinated CR networks that employ orthogonal frequency division multiple access (OFDMA) in the uplink. We frame the self-coexistence problem as a non-cooperative game, and propose an uplink soft frequency reuse (USFR) technique to enable globally power-efficient and locally fair sharing of white-space spectrum. In each network, uplink resource allocation is decoupled into two subproblems: subchannel allocation (SCA) and transmit power control (TPC). We provide a unique optimal solution to the TPC subproblem, and present a low-complexity heuristic for the SCA subproblem. Furthermore, we frame the TPC and SCA games, and integrate them as a heuristic algorithm that achieves the Nash equilibrium in a fully distributed manner. Our simulation results show that the proposed USFR technique significantly improves self-coexistence in several aspects, including spectrum utilization, power consumption, and intra-cell fairness.
In cognitive radio (CR) networks, licensed spectrum that can be shared by secondary users (SUs) is always restricted by the needs of primary users (PUs). Although channel aggregation (CA) can enable each SU to utilize multiple channels at a time, whether it is beneficial is actually subject to PU activity and radio capability. In this paper, we study the efficiency of CA under various such practical constraints and costs. First, we propose a new channel usage model to analyze the impact of both PU and SU behaviors on the availability of white spaces (WSs). This model is very general and thus can capture a wide range of user behaviors. Next, we model the delay costs for performing CA. User demands in both frequency and time domains are considered to evaluate the costs of negotiation and of renewing transmissions. Further, an optimal CA strategy is defined to minimize the cumulative delay for transmitting a given amount of data. Numerical and simulation results based on real data of PU activity show that user demands on both aggregated bandwidth and service duration should be carefully chosen in practice.
Expanded into two volumes, the Second Edition of Springer’s Encyclopedia of Cryptography and Security brings the latest and most comprehensive coverage of the topic: definitive information on cryptography and information security from highly regarded researchers; an effective tool for professionals in many fields and researchers of all levels; an extensive resource with more than 700 contributions in the Second Edition; and 5643 references, more than twice the number that appeared in the First Edition. With over 300 new entries, appearing in an A-Z format, the Encyclopedia of Cryptography and Security provides easy, intuitive access to information on all aspects of cryptography and security. As a critical enhancement to the First Edition’s base of 464 entries, the information in the Encyclopedia is relevant for researchers and professionals alike. Topics for this comprehensive reference were selected, written, and peer-reviewed by a pool of distinguished researchers in the field. The Second Edition’s editorial board now includes 34 scholars, expanded from 18 members in the First Edition. Representing the work of researchers from over 30 countries, the Encyclopedia is broad in scope, covering everything from authentication and identification to quantum cryptography and web security. The text’s practical style is instructional, yet fosters investigation. Each area presents concepts, designs, and specific implementations. The highly-structured essays in this work include synonyms, a definition and discussion of the topic, bibliographies, and links to related literature.
This book is available at Ebook.com.
In decentralized cognitive radio (CR) networks, enabling the radios to establish a control channel (i.e., "rendezvous" to establish a link) is a challenging problem. The use of a dedicated common control channel simplifies the rendezvous process but may not be feasible in many opportunistic spectrum sharing scenarios due to the dynamically changing availability of all the channels, including the control channel. To address this problem, researchers have proposed the use of channel hopping protocols for enabling rendezvous in CR networks. Most, if not all, of the existing channel hopping schemes only provide ad hoc approaches for generating channel hopping sequences and evaluating their properties. In this paper, we present a systematic approach, based on quorum systems, for designing and analyzing channel hopping protocols for the purpose of control channel establishment. The proposed approach, called Quorum-based Channel Hopping (QCH) system, can be used for implementing rendezvous protocols in CR networks that are robust against link breakage caused by the appearance of incumbent user signals. We describe two synchronous QCH systems under the assumption of global clock synchronization, and two asynchronous channel hopping systems that do not require global clock synchronization. Our analytical and simulation results show that the proposed channel hopping schemes outperform existing schemes under various network conditions.
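The quorum intersection property underlying such designs can be illustrated with grid quorums. This is a generic sketch, not the paper's QCH construction: in a k x k grid of time slots within a frame, each node picks one full row plus one full column as its quorum; any two such quorums intersect, so two nodes that tune to a common channel during their quorum slots are guaranteed overlapping slots in every frame.

```python
# Grid-quorum intersection sketch (illustrative of the quorum-system
# idea, not the paper's exact QCH design).
from itertools import product

def grid_quorum(k, row, col):
    # Slots in the chosen row or chosen column of a k x k frame.
    return {r * k + c for r, c in product(range(k), repeat=2)
            if r == row or c == col}

k = 4
# Every pair of row/column choices yields intersecting quorums:
# cell (row1, col2) always lies in both.
ok = all(grid_quorum(k, r1, c1) & grid_quorum(k, r2, c2)
         for r1, c1, r2, c2 in product(range(k), repeat=4))
print(ok)  # True: overlapping slots exist for any two quorum choices
```

The intersection guarantee is what makes the rendezvous robust: no matter which quorums two nodes independently choose, they share at least one slot per frame.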
In Cognitive Radio (CR) networks, establishing a link between a pair of communicating nodes requires that their radios are able to “rendezvous” on a common channel (a.k.a. a rendezvous channel). When unlicensed (secondary) users opportunistically share spectrum with licensed (primary or incumbent) users, a given rendezvous channel may become unavailable due to the appearance of licensed user signals, which makes rendezvous impossible. Ideally, any node pair should be able to rendezvous over every available channel to minimize the possibility of such rendezvous failures. Channel hopping (CH) protocols have been proposed previously for establishing pairwise rendezvous. Some of them enable pairwise rendezvous over all channels but require global clock synchronization, which is very difficult to achieve in decentralized networks. In this paper, we present a systematic approach, called asynchronous channel hopping (ACH), for designing CH-based rendezvous protocols for decentralized CR networks. The resulting protocols are resistant to rendezvous failures caused by the appearance of primary user signals and do not require clock synchronization. We propose an optimal ACH design that maximizes the rendezvous probability between any pair of nodes, and show its rendezvous performance via simulation results.
Declarative spectrum management through policy based dynamic spectrum access has garnered significant attention recently. Policy-based spectrum access decouples spectrum access policies from the radio platform. In policy-based spectrum access, a reasoning engine called the policy reasoner plays a critical role. The policy reasoner assists in policy enforcement and carries out a number of tasks related to policy analysis and processing. One of the most important tasks performed by the policy reasoner is evaluating transmission requests in the context of the currently active set of policies. This paper describes the design and implementation of a novel policy reasoner. The proposed policy reasoner uses multi-terminal binary decision diagrams (MTBDDs) to represent, interpret, and process policies. It uses a set of efficient graph-theoretic algorithms to translate policies into MTBDDs, merge policies into a single meta-policy, and compute opportunity constraints. In this paper, we demonstrate that policies can be processed efficiently by reframing the policy reasoning problem as a graph-based Boolean function manipulation problem. The proposed policy reasoner has the capability to respond to either under-specified or invalid transmission requests (sent by the system strategy reasoner) by returning a set of opportunity constraints that prescribes how the transmission parameters should be modified in order to make them conform to the policies. We propose three different algorithms for computing the opportunity constraints. The first algorithm computes opportunity constraints for under-specified transmission requests and its complexity is proportional to the number of variables in the meta-policy BDD. The second and third algorithms compute opportunity constraints for invalid transmission requests and their complexities are proportional to the number of variables and the size of the meta-policy BDD, respectively.
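The reasoning task itself, minus the MTBDD machinery, can be illustrated in miniature. In the toy sketch below (hypothetical policies and parameter names, not the paper's implementation), policies are boolean functions over discretized transmission parameters, merged into a single meta-policy; for an under-specified request, the reasoner enumerates completions and returns the permitted ones as opportunity constraints.

```python
# Toy policy-reasoning sketch: boolean policies over discretized
# transmission parameters (illustrative only; a real reasoner would
# represent the meta-policy as a decision diagram, not by enumeration).
from itertools import product

FREQ = ["tv_ch21", "tv_ch22", "ism_2400"]   # hypothetical channel names
POWER = ["low", "high"]

permit = lambda f, p: f.startswith("tv_")            # permit TV-band use
deny = lambda f, p: f == "tv_ch21" and p == "high"   # protect an incumbent

def meta_policy(f, p):
    # Merge rules: permitted unless some deny rule fires.
    return permit(f, p) and not deny(f, p)

def opportunity_constraints(request):
    """request: dict fixing some of 'freq'/'power'; returns permitted completions."""
    opts = []
    for f, p in product(FREQ, POWER):
        if request.get("freq", f) == f and request.get("power", p) == p:
            if meta_policy(f, p):
                opts.append({"freq": f, "power": p})
    return opts

# Under-specified request: frequency fixed, power left open.
print(opportunity_constraints({"freq": "tv_ch21"}))
# -> [{'freq': 'tv_ch21', 'power': 'low'}]
```

Enumeration is exponential in the number of parameters, which is precisely why the paper's MTBDD representation matters: it performs the same merge-and-constrain computation symbolically on the decision-diagram structure.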
In the context of IEEE 802.22 networks, the hidden incumbent problem refers to a situation in which a consumer premise equipment (CPE) is within the protection region of an operating incumbent but fails to report the existence of the incumbent to its base station (BS). In such a scenario, CPEs within the incumbent’s transmission range may not be able to decode the BS signal because of the strong interference from the incumbent signal. Moreover, the CPEs cannot report the existence of the incumbent, as their transmissions would cause interference to the incumbent; hence, the BS fails to detect the presence of the incumbent. To address this problem, IEEE 802.22 prescribes that the BS broadcast explicit out-of-band control signals on a set of candidate channels and that CPEs search for control signals on those candidate channels, so that they can send hidden incumbent detection messages to the BS via one of the candidate channels. In this paper, we present a systematic way of designating the candidate channel sets for the hidden incumbent detection (HID) protocol in 802.22 networks. The proposed approach has two noteworthy features: (1) it allows the BS and the CPE to choose different sets of candidate channels in a distributed manner without message exchanges; and (2) it significantly reduces the size of the set of candidate channels for each 802.22 entity, thus lowering the control overhead of the HID protocol.
Policy-based dynamic spectrum access is one of the spectrum access models being considered by regulators and researchers for regulating the behavior of cognitive radios. This approach to spectrum access decouples the policy-related components (i.e., policy management, provisioning, and reasoning) from the radio platform. In policy-based spectrum access, the policy reasoner plays a critical role—it assists in policy enforcement and carries out a number of tasks related to policy analysis and processing. One of the most crucial tasks performed by the policy reasoner is evaluating radio transmission requests in relation to a set of active policies. This paper describes the design process and architecture of a policy reasoner. Key features of the proposed policy reasoner include: (1) policy conflict detection and resolution; and (2) the ability to process under-specified transmission requests and compute the corresponding constraints.
This book gives comprehensive and balanced coverage of the principles of cognitive radio communications, cognitive networks, and details of their implementation, including the latest developments in the standards and spectrum policy. Case studies, end-of-chapter questions, and descriptions of various platforms and test beds, together with sample code, give hands-on knowledge of how cognitive radio systems can be implemented in practice. Extensive treatment is given to several standards, including IEEE 802.22 for TV White Spaces and IEEE SCC41. Written by leading people in the field, both at universities and major industrial research laboratories, this tutorial text gives communications engineers, R&D engineers, researchers, and undergraduate and postgraduate students a complete reference on the application of wireless communications and network theory for the design and implementation of cognitive radio systems and networks.
Each chapter is written by internationally renowned experts, giving complete and balanced treatment of the fundamentals of both cognitive radio communications and cognitive networks, together with implementation details. Extensive treatment of the latest standards and spectrum policy developments enables the development of compliant cognitive systems. A strong practical orientation, through case studies and descriptions of cognitive radio platforms and testbeds, shows how "real world" cognitive radio systems and network architectures have been built. Additional materials, slides, solutions to end-of-chapter problems, and sample code are available at www.elsevierdirect.com/companions.
Establishing a control channel for medium access control is a challenging problem in multi-channel and dynamic spectrum access (DSA) networks. In the design of multi-channel MAC protocols, the use of channel (or frequency) hopping techniques (a.k.a. parallel rendezvous) has been proposed to avoid the bottleneck of a single control channel. In DSA networks, the dynamic and opportunistic use of the available spectrum requires that the radios are able to "rendezvous", i.e., find each other to establish a link. The use of a dedicated global control channel simplifies the rendezvous process but may not be feasible in many opportunistic spectrum sharing scenarios due to the dynamically changing availability of all the channels, including the control channel. To address this problem, researchers have proposed the use of channel hopping protocols for enabling rendezvous in DSA networks. This paper presents a systematic approach, based on quorum systems, for designing and analyzing channel hopping protocols for the purpose of control channel establishment. The proposed approach, called Quorum-based Channel Hopping (QCH) system, can be used for implementing rendezvous protocols in DSA networks that are robust against link breakage caused by the appearance of incumbent user signals. We describe two optimal QCH systems under the assumption of global clock synchronization: the first system is optimal in the sense that it minimizes the time-to-rendezvous between any two channel hopping sequences; the second system is optimal in the sense that it guarantees the even distribution of the rendezvous points in terms of both time and channel, thus solving the rendezvous convergence problem. We also propose an asynchronous QCH system that does not require global clock synchronization. Our analytical and simulation results show that the channel hopping schemes designed using our framework outperform existing schemes under various network conditions.
IEEE 802.22 is the first wireless standard based on cognitive radio (CR) technology. It defines the air interface for a wireless regional area network (WRAN) that uses fallow segments of the TV broadcast bands. CR technology enables unlicensed users in WRANs to utilize licensed (incumbent) spectrum bands on a non-interference basis to incumbent users. The coexistence between incumbent users and unlicensed users is referred to as incumbent coexistence. On the other hand, the coexistence between unlicensed users in different WRAN cells is referred to as self-coexistence. 802.22 defines several inter-base station (BS) dynamic resource sharing mechanisms to enable overlapping cells to share spectrum. However, those mechanisms do not adequately address some of the key issues concerning incumbent and self coexistence. In this paper, we propose an inter-BS Coexistence-Aware Spectrum Sharing (CASS) protocol for overlapping 802.22 cells that takes into account coexistence requirements. Using simulation results, we show that the proposed protocol outperforms 802.22's self-coexistence solutions. To the best of our knowledge, the work presented here is the first systematic study of the self coexistence problem in the context of 802.22 WRANs.
The security of software defined radio (SDR) software is essential to the trustworthiness of the overall radio system. When designing and developing security solutions for SDR software, its performance requirements, such as stringent real-time constraints, need to be considered. In this paper, we describe a tamper resistance scheme that was designed to thwart the unauthorized tampering of SDR software. This scheme utilizes code encryption and branch functions to obfuscate the target program while enabling the program to satisfy its performance requirements. The scheme employs a technique called the Random Branch Function Call (RBFC), which enables a user to control the trade-off between integrity-checking frequency and overhead. We have rigorously evaluated the scheme using various performance metrics and quantified the relationship between the end-to-end delay overhead (caused by the tamper resistance scheme) and voice quality in the context of a voice communication network.
Phishing is an attempt to fraudulently acquire users' sensitive information, such as passwords or financial information, by masquerading as a trustworthy entity in online transactions. Recently, a number of researchers have proposed using external online resources like the Google Page Rank system to assist phishing detection. The advantage of such an approach is that the detection capability will gradually evolve and improve as the online resources become more sophisticated and manipulation-resistant. In this paper, we evaluate the effectiveness of three popular online resources in detecting phishing sites, viz. the Google Page Rank system, Yahoo! inlink data, and the Yahoo! directory service. Our results indicate that these online resources can be used to increase the accuracy of phishing site detection when used in conjunction with existing phishing countermeasures. The proposed approach involves examining attributes of a target site (the site being examined) derived from each of these three resources.
The aforementioned online resources are, by themselves, insufficient to address the phishing attack problem. We discuss how each of those resources may be integrated with existing phishing detection techniques to provide a more effective solution.
The dual goal of the "Handbook in Information Systems" is to provide a reference for the diversity of research in the field by scholars from many disciplines, as well as to stimulate new research. This volume, focusing on Information Assurance, Security and Privacy Services, consists of six sections. In the first section, contributors discuss Program Security, Data Security and Authentication, while the second section covers Internet Scourges and Web Security. The middle sections concentrate on Usable Security and Human-Centric Aspects, along with Security, Privacy and Access Control, whereas the final sections of the book examine Economic Aspects of Security, and Threat Modeling, Intrusion and Response. This book is available at the Emerald Bookstore.
More than a dozen Wireless @ Virginia Tech faculty are working to address the broad research agenda of cognitive radio and cognitive networks. Our core research team spans the protocol stack from radio and reconfigurable hardware to communications theory to the networking layer. Our work includes new analysis methods and the development of new software architectures and applications, in addition to work on the core concepts and architectures underlying cognitive radios and cognitive networks. This paper describes these contributions and points towards critical future work that remains to fulfill the promise of cognitive radio. We briefly describe the history of work on cognitive radios and networks at Virginia Tech and then discuss our contributions to the core cognitive processing underlying these systems, focusing on our cognitive engine. We also describe developments that support the cognitive engine and advances in radio technology that provide the flexibility desired in a cognitive radio node. We consider securing and verifying cognitive systems and examine the challenges of expanding the cognitive paradigm up the protocol stack to optimize end-to-end network performance. Lastly, we consider the analysis of cognitive systems using game theory and the application of cognitive techniques to problems in dynamic spectrum sharing and control of multiple-input multiple-output radios.
In December 2006 the online Web site Xanga.com was fined $1 million for failing to protect children's privacy as required under the Children's Online Privacy Protection Act (COPPA). The Federal Trade Commission (FTC) estimated that 1.7 million accounts were created by underage children without their parents' knowledge or consent. Although the site asked for a person's age before completing registration, warning those under thirteen that they could not participate, nevertheless the system allowed those who subsequently entered birthdates indicating that they were under thirteen to simply continue the process of registration and to access and post information on the site. Xanga also collected information from the children, including name, address, cell phone number, and instant messenger identification, which they posted in the child's online profile. The potential danger to young children was that this personally identifiable physical information was easily available online; the social networking site design encouraged communication and personal contact between registered users. Children could post profiles, pictures, and videos as well as communicate directly with other users. The FTC fine against Xanga was the largest ever imposed under COPPA; the settlement of the complaint required Xanga to pay a $1 million fine, implement policies compliant with COPPA, file additional status reports, and submit to monitoring by the FTC.
Cognitive Radio (CR) is seen as one of the enabling technologies for realizing a new spectrum access paradigm, viz. Opportunistic Spectrum Sharing (OSS). IEEE 802.22 is the world's first wireless standard based on CR technology. It defines the air interface for a wireless regional area network (WRAN) that uses fallow segments of the licensed (incumbent) TV broadcast bands. CR technology enables unlicensed (secondary) users in WRANs to utilize licensed spectrum bands on a non-interference basis to incumbent users. The coexistence between incumbent users and secondary users is referred to as incumbent coexistence. On the other hand, the coexistence between secondary users in different WRAN cells is referred to as self-coexistence. The 802.22 draft standard prescribes several mechanisms for addressing incumbent- and self-coexistence issues. In this paper, we describe how adversaries can exploit or undermine such mechanisms to degrade the performance of 802.22 WRANs and increase the likelihood of those networks interfering with incumbent networks. The standard includes a security sublayer to provide subscribers with privacy, authentication, and confidentiality. Our investigation, however, revealed that the security sublayer falls short of addressing all of the key security threats. We also discuss countermeasures that may be able to address those threats.
Children’s privacy has become critical with the increasing use of the Internet for commercial purposes and a corresponding increase in requests for information. 65% of children between the ages of 10 and 13 use the Internet for casual web surfing, chatting, games, schoolwork, e-mail, interactive learning, and other applications. Often, websites hosting these activities ask for personal information such as name, e-mail, street address, and phone number. The Children’s Online Privacy Protection Act (COPPA) of 1998 was enacted in reaction to the widespread collection of information from children and subsequent abuses identified by the Federal Trade Commission (FTC). COPPA is aimed at protecting a child’s privacy by requiring parental consent before collecting information from children under 13. In this paper, we describe an automated tool for protecting child privacy called Parental Online Consent for Kids Electronic Transaction (or POCKET). The POCKET framework is a novel, technically feasible and legally sound solution to automatically enforce COPPA. Parents answer a simple questionnaire regarding their privacy requirements and the POCKET user agent automatically converts it into a privacy preferences file. These preferences are enforced when a child uses the Internet. Only websites that adhere to the preferences can receive the child’s information, while websites whose policies do not match are blocked. A merchant-specific privacy information package and a signed digital agreement are uploaded to the qualified merchant from the client (child’s machine). The POCKET framework incorporates a secure handshake protocol to protect the data exchange between the client and the merchant. A local log file created by POCKET and the digital agreement are used to enforce merchant accountability.
Distributed spectrum sensing (DSS) enables a Cognitive Radio (CR) network to reliably detect licensed users and avoid causing interference to licensed communications. The data fusion technique is a key component of DSS. We discuss the Byzantine failure problem in the context of data fusion, which may be caused by either malfunctioning sensing terminals or Spectrum Sensing Data Falsification (SSDF) attacks. In either case, incorrect spectrum sensing data will be reported to a data collector which can lead to the distortion of data fusion outputs. We investigate various data fusion techniques, focusing on their robustness against Byzantine failures. In contrast to existing data fusion techniques that use a fixed number of samples, we propose a new technique that uses a variable number of samples. The proposed technique, which we call Weighted Sequential Probability Ratio Test (WSPRT), introduces a reputation-based mechanism to the Sequential Probability Ratio Test (SPRT). We evaluate WSPRT by comparing it with a variety of data fusion techniques under various network operating conditions. Our simulation results indicate that WSPRT is the most robust against the Byzantine failure problem among the data fusion techniques that were considered.
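The core idea of a reputation-weighted sequential test can be sketched as follows. This is a hedged illustration in the spirit of WSPRT, not the paper's exact formulation: the per-sensor detection and false-alarm probabilities, the SPRT thresholds, and the use of reputation as a multiplicative weight on each log-likelihood-ratio step are all assumed values for the example.

```python
import math

P_D, P_F = 0.9, 0.1        # assumed detection / false-alarm prob. of one sensor
A, B = 0.1, 10.0           # assumed SPRT thresholds for accepting H0 / H1

def wsprt(reports, reputations):
    """Sequentially accumulate reputation-weighted log-likelihood ratios.

    reports: iterable of 1 ('primary present') / 0 ('absent') local decisions.
    reputations: per-sensor weights in [0, 1]; a Byzantine node should tend to 0.
    Returns 'H1', 'H0', or 'undecided' if neither threshold is crossed.
    """
    llr = 0.0
    for report, weight in zip(reports, reputations):
        if report == 1:
            step = math.log(P_D / P_F)                # evidence for H1
        else:
            step = math.log((1 - P_D) / (1 - P_F))    # evidence for H0
        llr += weight * step                          # reputation damps untrusted input
        if llr >= math.log(B):
            return "H1"
        if llr <= math.log(A):
            return "H0"
    return "undecided"
```

Because the test is sequential, the number of reports consumed is variable, and a falsified report from a low-reputation sensor moves the statistic only slightly, which is the intuition behind WSPRT's robustness to Byzantine failures.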
Although a substantial amount of research has examined the constructs of warmth and competence, far less has examined how these constructs develop and what benefits may accrue when warmth and competence are cultivated. Yet there are positive consequences, both emotional and behavioral, that are likely to occur when brands hold perceptions of both. In this paper, we shed light on when and how warmth and competence are jointly promoted in brands, and why these reputations matter.
Cognitive radio (CR) is a promising technology that can alleviate the spectrum shortage problem by enabling unlicensed users equipped with CRs to coexist with incumbent users in licensed spectrum bands while causing no interference to incumbent communications. Spectrum sensing is one of the essential mechanisms of CRs and its operational aspects are being investigated actively. However, the security aspects of spectrum sensing have garnered little attention. In this paper, we identify a threat to spectrum sensing, which we call the primary user emulation (PUE) attack. In this attack, an adversary's CR transmits signals whose characteristics emulate those of incumbent signals. The highly flexible, software-based air interface of CRs makes such an attack possible. Our investigation shows that a PUE attack can severely interfere with the spectrum sensing process and significantly reduce the channel resources available to legitimate unlicensed users. To counter this threat, we propose a transmitter verification scheme, called LocDef (localization-based defense), which verifies whether a given signal is that of an incumbent transmitter by estimating its location and observing its signal characteristics. To estimate the location of the signal transmitter, LocDef employs a non-interactive localization scheme. Our security analysis and simulation results suggest that LocDef is effective in identifying PUE attacks under certain conditions.
With the ever increasing deployment and usage of gigabit networks, traditional network anomaly detection based Intrusion Detection Systems (IDS) have not scaled accordingly. Most, if not all, IDSs assume the availability of complete and clean audit data. We contend that this assumption is not valid. Factors like noise, mobility of the nodes and the large amount of network traffic make it difficult to build a traffic profile of the network that is complete and immaculate for the purpose of anomaly detection. In this paper, we attempt to address these issues by presenting an anomaly detection scheme, called SCAN (Stochastic Clustering Algorithm for Network Anomaly Detection), that has the capability to detect intrusions with high accuracy even with incomplete audit data. To address the threats posed by network-based denial-of-service attacks in high speed networks, SCAN consists of two modules: an anomaly detection module that is at the core of the design and an adaptive packet sampling scheme that intelligently samples packets to aid the anomaly detection module. The noteworthy features of SCAN include: (a) it intelligently samples the incoming network traffic to decrease the amount of audit data being sampled while retaining the intrinsic characteristics of the network traffic itself; (b) it computes the missing elements of the sampled audit data by utilizing an improved expectation–maximization (EM)-based clustering algorithm; and (c) it improves the speed of convergence of the clustering process by employing Bloom filters and data summaries.
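Bloom filters of the kind mentioned in feature (c) can be sketched in a few lines. This is a generic, minimal Bloom filter for compact flow summaries, not SCAN's actual data structure; the bit-array size, hash count, and flow-key format are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """Compact set membership with no false negatives (false positives possible)."""

    def __init__(self, n_bits=1024, n_hashes=3):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = 0                              # bit array packed into an int

    def _positions(self, item):
        # Derive n_hashes independent positions by salting SHA-256 with an index.
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.n_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

flows = BloomFilter()
flows.add("10.0.0.5:443->10.0.0.9:51324")          # hypothetical flow key
assert "10.0.0.5:443->10.0.0.9:51324" in flows     # membership: no false negatives
```

The appeal for high-speed traffic summarization is that insertion and lookup cost a fixed number of hash computations regardless of how many flows have been recorded.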
In the opportunistic spectrum sharing (OSS) paradigm, unlicensed users (a.k.a. secondary users) “opportunistically” operate in fallow licensed spectrum on a non-interference basis to licensed users (a.k.a. primary users). Each secondary user is equipped with a cognitive radio (CR) that has the capability to selectively operate in fallow licensed bands. In the OSS paradigm, the temporal and spatial spectrum variability caused by the primary users’ spectrum utilization adds another dimension of complexity to the problem of channel assignment. Because existing channel assignment approaches (which were originally designed for conventional wireless networks)—such as link-based and flow-based approaches—do not consider spectrum variability, they do not offer the best trade-off in terms of complexity and performance. In this paper, we investigate the channel assignment problem in single radio interface, CR ad hoc networks. We present a novel channel assignment scheme that assigns channels at the granularity of segments. The proposed scheme is significantly simpler than existing approaches, and offers several practical advantages. Using simulation results, we show that the proposed segment-based channel assignment strategy outperforms link-based channel assignment under realistic network conditions.
As children increasingly use the Internet, there have been mounting concerns about their privacy online. As a result, the U.S. Congress enacted the Children’s Online Privacy Protection Act (COPPA) to prohibit websites from collecting information from children under 13 years of age without verifiable parental consent. Unfortunately, few technologies are available for parents to provide this consent. Further, few parents are aware of the laws and technologies available. This research explored parental awareness of laws and technologies associated with protecting children’s privacy online, and usage of technologies and techniques for parental control, using focus group research. The results of the study are used to propose an emergent framework of factors that will impact use of privacy protection tools and techniques by parents.
Large-scale wireless sensor networks (WSNs) are highly vulnerable to attacks because they consist of numerous resource-constrained devices and communicate via wireless links. These vulnerabilities are exacerbated when WSNs have to operate unattended in a hostile environment, such as battlefields. In such an environment, an adversary poses a physical threat to all the sensor nodes, that is, an adversary may capture any node, compromising critical security data including keys used for confidentiality and authentication. Consequently, it is necessary to provide security services to these networks to ensure their survival. We propose a novel self-organizing key management scheme for large-scale, long-lived WSNs, called Survivable and Efficient Clustered Keying (SECK), that provides administrative services ensuring the survivability of the network. SECK is suitable for managing keys in a hierarchical WSN consisting of low-end sensor nodes clustered around more capable gateway nodes. Using cluster-based administrative keys, SECK provides five efficient security administration mechanisms: (1) clustering and key setup, (2) node addition, (3) key renewal, (4) recovery from multiple node captures, and (5) re-clustering. All of these mechanisms have been shown to localize the impact of attacks and considerably improve the efficiency of maintaining fresh session keys. Using simulation and analysis, we show that SECK is highly robust against node capture and key compromise while incurring low communication and storage overhead.
The design of the most commonly-used Internet and Local Area Network protocols provides no way of verifying that the sender of a packet is who it claims to be. A malicious host can easily launch an attack while pretending to be another host to avoid being discovered. To determine the identity of an attacker, an administrator can use traceback, a technique that determines the path of attack packets from the victim back to their origin. Most traceback research has focused on IP and Stepping-Stone techniques and little has been conducted on the problem of Data-Link Layer Traceback (DLT), the process of tracing frames from the network edge to the attack source. We propose a scheme called Tagged-fRAme tracebaCK (TRACK) that provides a secure, reliable DLT technique for Ethernet networks. TRACK defines processes for Ethernet switches and a centralized storage and lookup host. Simulation results indicate that TRACK provides accurate DLT operation while causing minimal impact on network and application performance.
As advances in networking technology help to connect the distant corners of the globe and as the Internet continues to expand its influence as a medium for communications and commerce, the threat from spammers, attackers and criminal enterprises has also grown accordingly. It is the prevalence of such threats that has made intrusion detection systems—the cyberspace’s equivalent to the burglar alarm—join ranks with firewalls as one of the fundamental technologies for network security. However, today’s commercially available intrusion detection systems are predominantly signature-based intrusion detection systems that are designed to detect known attacks by utilizing the signatures of those attacks. Such systems require frequent rule-base updates and signature updates, and are not capable of detecting unknown attacks. In contrast, anomaly detection systems, a subset of intrusion detection systems, model the normal system/network behavior which enables them to be extremely effective in finding and foiling both known as well as unknown or “zero-day” attacks. While anomaly detection systems are attractive conceptually, a host of technological problems need to be overcome before they can be widely adopted. These problems include: high false alarm rate, failure to scale to gigabit speeds, etc. In this paper, we provide a comprehensive survey of anomaly detection systems and hybrid intrusion detection systems of the recent past and present. We also discuss recent technological trends in anomaly detection and identify open problems and challenges in this area.
Attack mitigation schemes actively throttle attack traffic generated in Distributed Denial-of-Service (DDoS) attacks. This paper presents Attack Diagnosis (AD), a novel attack mitigation scheme that adopts a divide-and-conquer strategy. AD combines the concepts of Pushback and packet marking, and its architecture is in line with the ideal DDoS attack countermeasure paradigm—attack detection is performed near the victim host and packet filtering is executed close to the attack sources. AD is a reactive defense mechanism that is activated by a victim host after an attack is detected. By instructing its upstream routers to mark packets deterministically, the victim can trace back one attack source and command an AD-enabled router close to the source to filter the attack packets. This process isolates one attacker and throttles it, which is repeated until the attack is mitigated. We also propose an extension to AD called Parallel Attack Diagnosis (PAD) that is capable of throttling traffic coming from a large number of attackers simultaneously. AD and PAD are analyzed and evaluated using the Skitter Internet map, Lumeta’s Internet map, and the 6-degree complete tree topology model. Both schemes are shown to be robust against IP spoofing and to incur low false positive ratios.
Distributed Denial-of-Service (DDoS) attacks have become a major threat to the Internet. As a countermeasure against DDoS attacks, IP traceback schemes identify the network paths the attack traffic traverses. This paper presents a novel IP traceback scheme called Router Interface Marking (RIM). In RIM, a router probabilistically marks packets with a router interface’s identifier. After collecting the packets marked by each router in an attack path, a victim machine can use the information in the marked packets to trace back to the attack source. Different from most existing IP traceback schemes, RIM marks packets with the information of router interfaces rather than that of router IP addresses. This difference endows RIM with several advantageous features, including fast traceback speed, last-hop traceback capability, small computation overhead, low occurrence of false positives, and enhanced security.
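The probabilistic marking and path reconstruction described above can be illustrated with a toy simulation. This is a deliberately simplified sketch in the spirit of RIM, not the scheme itself: the marking probability, the unbounded mark fields, and the reconstruction-by-distance step are illustrative assumptions (the real scheme must fit marks into scarce IP header bits).

```python
import random

MARK_PROB = 0.25   # assumed per-router marking probability

def forward(path_interfaces, rng):
    """Send one packet along the path; each router may overwrite the mark."""
    mark, distance = None, None
    for iface in path_interfaces:
        if rng.random() < MARK_PROB:
            mark, distance = iface, 0      # this router stamps its interface id
        elif distance is not None:
            distance += 1                  # downstream routers count hops
    return mark, distance

def reconstruct(path_interfaces, n_packets=5000, seed=1):
    """Victim orders the collected (interface, distance) marks by distance."""
    rng = random.Random(seed)
    seen = {}
    for _ in range(n_packets):
        mark, dist = forward(path_interfaces, rng)
        if mark is not None:
            seen[mark] = dist
    # The farthest surviving mark belongs to the router closest to the source.
    return [iface for iface, _ in sorted(seen.items(), key=lambda kv: -kv[1])]

path = ["if-src", "if-r1", "if-r2", "if-victim"]
assert reconstruct(path) == path
```

A mark survives only when no downstream router overwrites it, so each interface's surviving marks carry a fixed hop distance, which is what lets the victim recover the router ordering from enough attack packets.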
We propose a secure routing architecture for Mobile Ad hoc NETworks (MANETs) called ThroughpUt-Feedback (TUF) routing, which is resilient against a wide range of routing disruption Denial-of-Service (DoS) attacks. Unlike many existing solutions, TUF does not focus on a particular type of attack, but instead takes an approach that is fundamentally more general. TUF is a cross-layer technique that detects attacks at the transport layer but responds to attacks at the network layer. Because most routing disruption attacks cause a significant drop in end-to-end goodput, monitoring the goodput of a route at the transport layer can detect abnormalities in the network (e.g., node or link failures, DoS attacks, etc.). Once an abnormal event is detected, a route rebuilding process is initiated at the network layer to find a new route. Using analysis and simulation results, we show that the TUF architecture is effective in thwarting a wide range of attacks, including protocol-compliant (also known as “JellyFish”) attacks.
Denial-of-Service (DoS) attacks pose a major threat to the availability of wireless ad hoc networks. Fault tolerant operation of wireless ad hoc networks will depend on the placement of DoS countermeasures in sufficiently robust form. In this paper, we describe a novel type of DoS attack called the Stasis Trap attack, and propose a technique for detecting such an attack. Stasis Trap attack has two distinguishing characteristics—it has a cross-layer design, and is stealthy. The Stasis Trap attack has a cross-layer design in that it is launched from the MAC layer but its aim is to degrade the end-to-end throughput of flows at the transport layer by exploiting TCP’s congestion-control mechanism. Specifically, an adversary launches a Stasis Trap attack against neighboring nodes by periodically preempting the wireless channel in order to cause large variations in the round trip time (RTT) of TCP flows. Channel preemptions are carried out by manipulating the back-off mechanism of the Distributed Coordinating Function of the 802.11 MAC protocol. The periodic preemptions induce large RTT variations in the TCP flows that are within the transmission range of the adversary. This in turn causes a significant drop in the throughput of those flows, thereby creating a “stasis trap” around the adversary that entangles TCP flows. The aforementioned attack severely degrades end-to-end throughput but has very little effect on MAC-layer throughput, and hence it is very hard to detect at the MAC layer, which is its point of attack. In this sense, this attack is stealthy. To detect the Stasis Trap attack, we propose a minimax robust decentralized detection framework with robust hypothesis testing.
There is an emerging need for the traffic processing capability of network security mechanisms, such as intrusion detection systems (IDS), to match the high throughput of today’s high-bandwidth networks. Recent research has shown that the vast majority of security solutions deployed today are inadequate for processing traffic at a sufficiently high rate to keep pace with the network’s bandwidth. To alleviate this problem, packet sampling schemes at the front end of network monitoring systems (such as an IDS) have been proposed. However, existing sampling algorithms are poorly suited for this task especially because they are unable to adapt to the trends in network traffic. Satisfying such a criterion requires a sampling algorithm to be capable of controlling its sampling rate to provide sufficient accuracy at minimal overhead. To meet this goal, adaptive sampling algorithms have been proposed. In this paper, we put forth an adaptive sampling algorithm based on weighted least squares prediction. The proposed sampling algorithm is tailored to enhance the capability of network-based IDSs at detecting denial-of-service (DoS) attacks. Not only does the algorithm adaptively reduce the volume of data that would be analyzed by an IDS, but it also maintains the intrinsic self-similar characteristic of network traffic. The latter characteristic of the algorithm can be used by an IDS to detect DoS attacks by using the fact that a change in the self-similarity of network traffic is a known indicator of a DoS attack.
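The prediction-driven rate control can be sketched as follows. This is a hedged illustration of the general technique, not the paper's algorithm: the exponential-decay weighting, window handling, error tolerance, and multiplicative rate update are all assumed choices.

```python
def wls_predict(window, decay=0.8):
    """Weighted-least-squares fit of y = a + b*t over a window of at least
    two samples (recent samples weighted more), extrapolated one step ahead."""
    n = len(window)
    ts = range(n)
    ws = [decay ** (n - 1 - t) for t in ts]            # newest sample: weight 1
    sw = sum(ws)
    st = sum(w * t for w, t in zip(ws, ts))
    sy = sum(w * y for w, y in zip(ws, window))
    stt = sum(w * t * t for w, t in zip(ws, ts))
    sty = sum(w * t * y for w, t, y in zip(ws, ts, window))
    b = (sw * sty - st * sy) / (sw * stt - st * st)    # weighted slope
    a = (sy - b * st) / sw                             # weighted intercept
    return a + b * n                                   # prediction for time n

def next_rate(rate, window, new_sample, tol=0.25, r_min=0.01, r_max=1.0):
    """Raise the sampling rate when traffic deviates from the WLS prediction
    (possible anomaly); decay it while traffic stays predictable."""
    err = abs(new_sample - wls_predict(window)) / max(abs(new_sample), 1e-9)
    return min(r_max, rate * 2) if err > tol else max(r_min, rate * 0.9)
```

For example, with a flat recent window `[10, 10, 10]` the predictor returns 10, so a sudden sample of 50 (a burst, as in a DoS onset) produces a large relative error and doubles the sampling rate.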
This paper proposes an attack-resilient routing architecture, called cross-layer active re-routing (CARE), for mobile ad hoc networks (MANETs). Different from existing solutions, CARE does not focus on a particular type of attack, but instead takes a fundamentally general approach: it achieves resilience against a wide range of routing disruption Denial-of-Service (DoS) attacks by treating them and "dysfunctional" network events in the same way. Here, dysfunctional network events denote link and routing failures caused by link contention or node mobility. CARE is a cross-layer scheme that detects attacks at the transport layer but responds to them at the network layer. Because dysfunctional network events and routing disruption attacks have a pronounced effect on the size of the TCP congestion window, monitoring the window size is an effective method of detecting such events. Using this method, CARE is able to detect attacks. Once an attack is detected, CARE initiates a re-routing process to find a new route. For this purpose, a re-routing algorithm is proposed that circumvents the nodes that are likely to be misbehaving. Analysis and simulation results show that the CARE architecture is effective in thwarting a number of insider and protocol-compliant attacks. Our results indicate that CARE is also effective in improving network throughput in non-hostile environments because its proactive re-routing mechanism aids in maintaining a reasonable level of throughput when dysfunctional network events occur.
Cognitive Radio (CR) is a promising technology that can alleviate the spectrum shortage problem by enabling unlicensed users equipped with CRs to coexist with incumbent users in licensed spectrum bands without inducing interference to incumbent communications. Spectrum sensing is one of the essential mechanisms of CRs that has attracted great attention from researchers recently. Although the operational aspects of spectrum sensing are being investigated actively, its security aspects have garnered little attention. In this paper, we describe an attack that poses a great threat to spectrum sensing. In this attack, which is called the primary user emulation (PUE) attack, an adversary's CR transmits signals whose characteristics emulate those of incumbent signals. The highly flexible, software-based air interface of CRs makes such an attack possible. Our investigation shows that a PUE attack can severely interfere with the spectrum sensing process and significantly reduce the channel resources available to legitimate unlicensed users. As a way of countering this threat, we propose a transmitter verification procedure that can be integrated into the spectrum sensing mechanism. The transmitter verification procedure employs a location verification scheme to distinguish incumbent signals from unlicensed signals masquerading as incumbent signals. Two alternative techniques are proposed to realize location verification: Distance Ratio Test and Distance Difference Test. We provide simulation results of the two techniques as well as analyses of their security in the paper.
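The Distance Ratio Test above can be illustrated with a toy model. Assume an idealized log-distance path-loss law with no fading or measurement noise (the paper analyzes the realistic case): the RSS difference between two verifiers then fixes the ratio of their distances to the transmitter, independently of its transmit power, and that ratio can be checked against the known incumbent location. The function names and the tolerance below are hypothetical.

```python
import math

def rss_distance_ratio(rss1_dbm, rss2_dbm, path_loss_exp=2.0):
    """Under log-distance path loss, rss = P - 10*n*log10(d), so the RSS
    difference at two verifiers yields d1/d2 without knowing the transmit
    power P."""
    return 10 ** ((rss2_dbm - rss1_dbm) / (10 * path_loss_exp))

def distance_ratio_test(rss1, rss2, v1, v2, claimed_tx, tol=0.2):
    """Accept the signal as the incumbent's iff the RSS-derived distance
    ratio matches the geometric ratio to the known incumbent location."""
    measured = rss_distance_ratio(rss1, rss2)
    geometric = math.dist(v1, claimed_tx) / math.dist(v2, claimed_tx)
    return abs(measured - geometric) / geometric < tol
```

A PUE attacker transmitting from a different location produces RSS values whose ratio is inconsistent with the incumbent's known position, so the test rejects the signal.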
The Federal Communications Commission (FCC) regulates radio spectrum by applying regulatory paradigms. In the conventional spectrum management paradigm, a group of primary users is given license to operate exclusively in a specific band. Recent studies have shown that a new paradigm is needed to alleviate the spectrum shortage problem. In the new paradigm, licensed bands are opened up to unlicensed operations by secondary users on a non-interference basis to primary users. Cognitive radio (CR) technology is seen as the enabling technology for realizing this new paradigm. Emergency communication networks and military tactical networks of the future are expected to be built from multi-hop CR networks. In a typical MAC protocol designed for multi-hop CR networks, a node uses the common control channel to perform channel negotiations before data transmission. Recent research findings indicate that the common control channel is highly vulnerable to network attacks. In this paper, we examine MAC layer misbehaviors in multi-hop (ad-hoc) CR networks. First, we study the problem of control channel saturation attacks. This type of attack can cripple the channel assignment process. Second, we investigate selfish misbehaviors that exploit deficiencies in MAC protocols for CR networks. We use simulation data to evaluate the impact of such misbehaviors in terms of network availability and fairness.
Sensor networks differ from traditional networks in many aspects, including their limited energy, memory space, and computational capability. These differentiators create unique security vulnerabilities. Security in Sensor Networks covers all aspects of the subject, serving as an invaluable reference for researchers, educators, and practitioners in the field. Containing thirteen invited chapters from internationally recognized security experts, this volume details attacks, encryption, authentication, watermarking, key management, secure routing, secure aggregation, secure location, and cross-layer design. It offers insight into attacking and defending routing mechanisms in ad hoc and sensor networks, and analyzes MAC-layer attacks in 802.15.4 sensor networks.
Nodes in a mobile ad hoc network need to thwart various attacks and malicious activities. This is especially true for the ad hoc environment, where there is a total lack of centralized or third-party authentication and security architectures. This paper presents a game-theoretic model to analyze intrusion detection in mobile ad hoc networks. We use game theory to model the interactions between the nodes of an ad hoc network. We view the interaction between an attacker and an individual node as a two-player non-cooperative game, and construct models for such a game.
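The two-player non-cooperative game can be made concrete with a small example. The payoff matrices below are illustrative inventions, not the paper's model: an attacker chooses attack/wait, a node chooses monitor/sleep, and we enumerate pure-strategy Nash equilibria.

```python
from itertools import product

# Hypothetical payoffs for a 2x2 attacker-vs-node game.
# Rows: attacker (0 = attack, 1 = wait); columns: node (0 = monitor, 1 = sleep).
ATTACKER = [[-2, 3],   # attacking is punished if monitored, pays off otherwise
            [ 0, 0]]   # waiting is neutral
NODE     = [[ 1, -3],  # monitoring catches an attack; sleeping misses it
            [-1,  0]]  # monitoring idle traffic wastes energy

def pure_nash(a, b):
    """Return all pure-strategy profiles (i, j) from which neither player
    can gain by deviating unilaterally."""
    return [(i, j) for i, j in product(range(2), range(2))
            if all(a[k][j] <= a[i][j] for k in range(2))
            and all(b[i][k] <= b[i][j] for k in range(2))]
```

With these payoffs `pure_nash` finds no profile: the interaction is an inspection game, so any equilibrium is in mixed strategies. That is precisely the game-theoretic argument for a node randomizing when it runs its intrusion detection.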
This paper presents a novel countermeasure against Distributed Denial-of-Service (DDoS) attacks that we call rouTer poRt mArking and paCKet filtering (TRACK), which includes the functions of both IP traceback and packet filtering. TRACK is a comprehensive solution composed of two components: a router port marking module and a packet filtering module. The former is a novel packet marking scheme for IP traceback, and the latter is a novel packet filtering scheme that utilizes the information gathered by the former. The router port marking scheme marks packets by probabilistically writing a router interface's port number, a locally unique 6-digit identifier, into the packets it transmits. After collecting the packets marked by each router in an attack path, a victim machine can use the information contained in those packets to trace the attack back to its source (i.e., solve the "IP traceback" problem). In the packet filtering component, the information contained in the same packets is used to filter malicious packets at the upstream routers (i.e., routers located in the direction of the attackers), thus effectively mitigating attacks. Because very little space is required to mark a port number, TRACK allows us to include attack signature information along with the port number within a single packet's IP header. The resulting advantage is threefold: (1) significantly fewer packets need to be collected to trace back the attack source compared to previous IP traceback schemes; (2) very little computational overhead is required in the traceback process; and (3) scalability: a large number of attackers (i.e., zombies) can be traced back efficiently. Because TRACK uses the router interface instead of the entire router as the "atomic unit" for IP traceback and packet filtering, it accomplishes these tasks with much finer granularity, which helps lower the false positive rate. We also show that TRACK supports gradual deployment.
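The probabilistic marking idea behind TRACK's first component can be sketched in miniature. This simplified version records a (router, port) pair in a single mark field with fixed probability, in the spirit of classic probabilistic packet marking; TRACK's actual compact encoding of port numbers and attack signatures in the IP header is described in the paper. All identifiers and the marking probability here are hypothetical.

```python
import random
from collections import Counter

MARK_PROB = 0.04   # per-router marking probability (illustrative)

def forward(packet, router_id, in_port, p=MARK_PROB):
    """With probability p, overwrite the packet's single mark field with
    (router_id, in_port): the interface the packet arrived on."""
    if random.random() < p:
        packet["mark"] = (router_id, in_port)
    return packet

def rank_marks(marked_packets):
    """Victim side: tally the collected marks. Marks written near the
    victim survive overwriting most often, so the counts order the
    attack path from the victim outward."""
    counts = Counter(p["mark"] for p in marked_packets if "mark" in p)
    return [mark for mark, _ in counts.most_common()]
```

Because each downstream router may overwrite earlier marks, the victim sees marks from its last-hop router most frequently, and the frequency ordering reconstructs the path.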
Mobile Ad hoc NETworks (MANETs) are decentralized environments composed of mobile computing devices that interact with each other via multi-hop wireless links. MANET nodes forward packets on behalf of other nodes in the network, and such routing decisions are made autonomously by individual nodes. These characteristics make MANETs highly vulnerable to a myriad of physical and cyber attacks. Cryptographic solutions, while effective for maintaining confidentiality and authentication, cannot mitigate some critical attacks on MANET availability, in particular insider and protocol-compliant routing disruption Denial-of-Service (DoS) attacks. This paper proposes a novel secure routing architecture for MANETs called ThroughpUt-Feedback (TUF) routing, which is designed to be resilient against most known forms of routing disruption DoS attacks. Our approach is to monitor the end-to-end "good" throughput (or "goodput") of closed-loop flows to detect attacks that are impossible to detect using existing methods operating at the network layer. A major advantage of the TUF architecture is that it can be readily integrated into on-demand source routing protocols. TUF provides mechanisms that monitor the goodput of the current route to detect abnormalities (e.g., node or link failures, DoS attacks), and then initiates a route rebuilding process once the route has been determined to be abnormal. TUF is agile: it limits control overhead by using low-overhead schemes until an attack condition requires higher-overhead route management schemes. Using analysis and simulations, we show that the TUF architecture is resilient against a wide range of attacks, including protocol-compliant (also known as "JellyFish") attacks.
Attack mitigation schemes actively throttle attack traffic generated in Distributed Denial-of-Service (DDoS) attacks. This paper presents Attack Diagnosis (AD), a novel attack mitigation scheme that combines the concepts of Pushback and packet marking. AD's architecture is in line with the ideal DDoS countermeasure paradigm, in which attack detection is performed near the victim host and attack mitigation is executed close to the attack sources. AD is a reactive defense that is activated by a victim host after an attack has been detected. A victim activates AD by sending AD-related commands to its upstream routers. On receipt of such commands, the AD-enabled upstream routers deterministically mark each packet destined for the victim with the information of the input interface that processed that packet. By collecting the router interface information recorded in the packet markings, the victim can trace the attack traffic back to the attack sources. Once the traceback is complete, the victim issues messages that command AD-enabled routers to filter attack packets close to the source. The AD commands can be authenticated using the TTL field of the IP header without relying on any global key distribution infrastructure on the Internet. Although AD can effectively filter traffic generated by a moderate number of attack sources, it is not effective against large-scale attacks. To address this problem, we propose an extension to AD called Parallel Attack Diagnosis (PAD) that is capable of throttling traffic coming from a large number of attack sources simultaneously. AD and PAD are analyzed and evaluated using a realistic network topology based on the Skitter Internet map. Both schemes are shown to be robust against IP spoofing and to incur low false positive ratios.
With the ever-increasing deployment and usage of gigabit networks, traditional anomaly-based intrusion detection systems have not scaled accordingly. Most, if not all, deployed systems assume the availability of complete and clean audit data for the purpose of intrusion detection. We contend that this assumption is not valid. Factors such as noise in the audit data, the mobility of nodes, and the large volume of data generated by the network make it difficult to build a normal traffic profile of the network for the purpose of anomaly detection. From this perspective, we present an anomaly detection scheme, called SCAN (stochastic clustering algorithm for network anomaly detection), that can detect intrusions with high accuracy even when the audit data is incomplete. We use the expectation-maximization algorithm to cluster the incoming audit data and impute the missing values in the audit data. We improve the speed of convergence of the clustering process by using Bloom filters and data summaries. We evaluate SCAN using the 1999 DARPA/Lincoln Laboratory intrusion detection evaluation data set.
Large-scale, high-profile Distributed Denial-of-Service (DDoS) attacks have become common recurring events that increasingly threaten the proper functioning and continual success of the Internet. Recently, client puzzle protocols have been proposed as a mitigation technique for DoS attacks. These protocols require a client to solve a cryptographic “puzzle” before it receives any service from a remote server. By embedding the client puzzle mechanism into the lowest layer of the Internet protocol stack that is vulnerable against network DoS attacks—the network layer—we can mitigate the most virulent form of DoS attacks: flooding-based DDoS attacks. This paper describes the framework of a novel IP-layer client puzzle protocol that we call Chained Puzzles. We describe the framework in detail and show its effectiveness using simulation results.
A wireless sensor network (WSN) typically consists of a large number of small sensor nodes and one or more high-end control and data aggregation nodes. Sensor nodes have limited computation and communication capabilities, and communicate via wireless links. Consequently, WSNs are highly vulnerable to attacks. This vulnerability is exacerbated when WSNs have to operate unattended in a hostile environment, such as battlefields. In this paper, we propose a novel self-organizing key management scheme for large-scale WSNs, called Survivable and efficient clustered keying (SECK). Our scheme was designed specifically to address the key management issues within the low-tier of a hierarchical network architecture. Previous approaches for WSN key management adequately addressed operational issues, but to a large extent, ignored robustness and recoverability issues. Using simulation and analysis, we show that SECK is highly robust against key and node captures, and has noteworthy advantages over other key management schemes.
Encryption algorithms can be used to help secure wireless communications, but securing data also consumes resources. The goal of this research is to provide users or system developers of personal digital assistants and applications with the associated time and energy costs of using specific encryption algorithms. Four block ciphers (RC2, Blowfish, XTEA, and AES) were considered. The experiments included encryption and decryption tasks with different cipher and file size combinations. The resource impact of the block ciphers was evaluated using the latency, throughput, energy-latency product, and throughput/energy ratio metrics.
We found that RC2 encrypts faster and uses less energy than XTEA, followed by AES. The Blowfish cipher is a fast encryption algorithm, but the size of the plaintext affects its encryption speed and energy consumption. Faster algorithms seem to be more energy efficient because of differences in speed rather than differences in power consumption levels while encrypting.
We address the problem of providing guaranteed quality-of-service (QoS) connections over a multifrequency time-division multiple-access (MF-TDMA) system that employs differential phase-shift keying (DPSK) with various modulation modes. The problem can be divided into two parts: resource calculation and resource allocation. We present algorithms for performing these two tasks and evaluate their performance in the case of a Milstar extremely high frequency satellite communication (EHF-SATCOM) system. In the resource-calculation phase, we calculate the minimum number of timeslots required to provide the desired level of bit-error rate (BER) and data rate. The BER is directly affected by disturbances in the link parameters. We use a Markov modeling technique to predict the worst-case disturbance over the connection duration. The Markov model is trained offline to generate a transition-probability matrix, which is then used for predicting the worst-case disturbance level. We provide simulation results to demonstrate that our scheme outperforms the scheme currently implemented in the EHF-SATCOM system. The resource-allocation phase addresses the problem of allocating actual timeslots in the MF-TDMA channel structure (MTCS). If we view the MTCS as a collection of bins, then the allocation of timeslots can be considered a variant of the dynamic bin-packing problem. Because this problem is known to be NP-complete, obtaining an optimal packing scheme requires a prohibitive amount of computation. We propose a novel packing heuristic called reserve channel with priority (RCP) fit and show that it outperforms two common bin-packing heuristics.
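The bin-packing view of timeslot allocation can be made concrete with the first-fit heuristic, one of the common baselines such packing schemes are compared against (the RCP heuristic itself is specified in the paper, so a plain first-fit is shown instead):

```python
def first_fit(demands, capacity):
    """Place each timeslot demand into the first frequency channel ("bin")
    that still has room; open a new channel when none does."""
    bins = []
    for d in demands:
        for b in bins:
            if sum(b) + d <= capacity:
                b.append(d)
                break
        else:                    # no existing channel fits this demand
            bins.append([d])
    return bins
```

First-fit is fast but fragments capacity: a demand that arrives after several partial fills may force a new channel open even when the total spare capacity would have sufficed, which is the inefficiency a smarter heuristic targets.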
Over the past few years, denial of service (DoS) attacks have become more of a threat than ever. DoS attacks are aimed at denying or degrading service for a legitimate user by exhausting the resources for a particular system. Client puzzle protocols have received attention in recent years as a method for combating DoS attacks. In a client puzzle protocol, the client is forced to solve a cryptographic puzzle before it can establish a connection with a remote server. This paper introduces a novel client puzzle protocol that utilizes a modification of the Extended Tiny Encryption Algorithm. An implementation of the client puzzle protocol was completed in the TCP stack of the Mandrake Linux 9.2 operating system. We call this modification to the TCP stack pTCP (for Puzzle TCP). Our client puzzle algorithm is very fast, and is portable to other systems and architectures. More importantly, it is very effective against connection depletion DoS attacks and other resource exhaustion DoS attacks (on the server) because minimal computation load is imposed on the server to verify the solution to a given puzzle. Our client puzzle protocol is also effective against various other resource exhaustion attacks within the transport layer, and can help prevent attacks that exist at the application layer. In this paper, we describe our client puzzle protocol in detail, and show its effectiveness against DoS attacks by using experimental results.
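The client-puzzle mechanism can be sketched with a generic hash-based puzzle. Note that this is not the paper's construction (which modifies the Extended Tiny Encryption Algorithm); a SHA-256 partial preimage stands in here, and the function names are assumptions.

```python
import hashlib
import os

def make_puzzle(difficulty_bits=16):
    """Server side: issue a fresh random nonce and a difficulty level."""
    return os.urandom(8), difficulty_bits

def solve(nonce, bits):
    """Client side: brute-force an x such that SHA-256(nonce || x) has
    `bits` leading zero bits -- about 2**bits hash evaluations on average."""
    target = 1 << (256 - bits)
    x = 0
    while int.from_bytes(hashlib.sha256(nonce + x.to_bytes(8, "big")).digest(),
                         "big") >= target:
        x += 1
    return x

def verify(nonce, bits, x):
    """Server side: a single hash evaluation checks the solution."""
    digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))
```

The asymmetry is the point: solving costs the client roughly 2^bits hashes while verification costs the server one, so a flood of connection attempts imposes negligible load on the defender.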
Nodes in a mobile ad hoc network need to come up with countermeasures against malicious activity. This is especially true for the ad hoc environment, where there is a total lack of centralized or third-party authentication and security architectures. This paper presents a game-theoretic method to analyze intrusion detection in mobile ad hoc networks. We use game theory to model the interactions between the nodes of an ad hoc network. We view the interaction between an attacker and an individual node as a two-player noncooperative game, and construct models for such a game.
We describe a novel method for authenticating multicast packets that is robust against packet loss. Our focus is to minimize the communication overhead required to authenticate the packets. Our approach is to encode the hash values and the signatures with Rabin's Information Dispersal Algorithm (IDA) to construct an authentication scheme that amortizes a single signature operation over multiple packets. This strategy is especially efficient in terms of space overhead, because just the essential elements needed for authentication (i.e., one hash per packet and one signature per group of packets) are used in conjunction with an erasure code that is space optimal. Using asymptotic techniques, we derive the authentication probability of our scheme under two different bursty loss models. A lower bound of the authentication probability is also derived for one of the loss models. To evaluate the performance of our scheme, we compare our technique with four other previously proposed schemes using empirical results.
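The amortization at the heart of the scheme, one signature per group of packets rather than per packet, can be sketched as follows. The IDA dispersal step that provides loss tolerance is omitted, and an HMAC stands in for the sender's digital signature; both substitutions are for brevity only.

```python
import hashlib
import hmac

KEY = b"demo-signing-key"   # stand-in for the sender's private signing key

def sign_group(packets):
    """Hash every packet, then authenticate the concatenated hash list with
    ONE signature operation (HMAC here), amortizing its cost over the group."""
    hashes = [hashlib.sha256(p).digest() for p in packets]
    sig = hmac.new(KEY, b"".join(hashes), hashlib.sha256).digest()
    return hashes, sig

def verify_packet(packet, index, hashes, sig):
    """Check the group signature once, then the packet against its own hash."""
    expected = hmac.new(KEY, b"".join(hashes), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, sig):
        return False
    return hashlib.sha256(packet).digest() == hashes[index]
```

In the full scheme the hash list and signature are not sent in the clear like this; they are dispersed across the packets with IDA so that any sufficiently large subset of received packets reconstructs them despite loss.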
A novel certified e-mail protocol that is particularly suitable for mobile environments is described. Our protocol uses an off-line trusted third party (TTP). Protocols with an off-line TTP, also known as optimistic protocols, have numerous practical advantages over protocols with an on-line TTP. Nonetheless, many protocols adopt an on-line TTP, primarily because optimistic protocols often entail intricate cryptographic primitives that incur considerable overhead. By using a novel signature paradigm, which we call gradational signatures, we show that it is possible to construct optimistic protocols that are comparable to on-line protocols in terms of computation and communication overhead. This makes our scheme especially desirable in the mobile setting.
Applications such as e-commerce payment protocols, electronic contract signing, and certified e-mail delivery require that fair exchange be assured. A fair-exchange protocol allows two parties to exchange items in a fair way so that either each party gets the other's item, or neither party does. We describe a novel method of constructing very efficient fair-exchange protocols by distributing the computation of RSA signatures. Specifically, we employ multisignatures based on the RSA signature scheme. To date, the vast majority of fair-exchange protocols have required the use of zero-knowledge proofs, which are the most computationally intensive part of the exchange protocol. Using the intrinsic features of our multisignature model, we construct protocols that require no zero-knowledge proofs in the exchange protocol itself. Zero-knowledge proofs are needed only in the protocol setup phase; this is a one-time cost. Furthermore, our scheme uses multisignatures that are compatible with the underlying standard (single-signer) signature scheme, which makes it possible to readily integrate the fair-exchange feature with existing e-commerce systems.
Fueled by the exponential growth in the number of people with access to the Internet, electronic-commerce (e-commerce) transactions via the Internet have become a major part of our economy. For a wider range of e-commerce applications to take advantage of the untapped business potential of the Internet, some challenging and interesting security problems need to be solved. In this thesis, we study two such problems, and provide efficient solutions for both.
In the foreseeable future, some e-commerce vendors will generate revenue by providing digital streaming applications such as information broadcasts (e.g., stock quotes). For the first issue, we investigate the problem of authenticating packet streams in multicast or broadcast networks. Our approach is to encode the hash values and digital signatures with Rabin's Information Dispersal Algorithm (IDA) to construct an authentication scheme that amortizes a single signature operation over multiple packets. This strategy is especially efficient in terms of space overhead because just the essential elements needed for authentication (i.e., one hash per packet and one signature per group of packets) are used in conjunction with an erasure code that is space optimal. We evaluate the performance of our scheme using both analytical and empirical results.
Applications such as e-commerce payment protocols using electronic money require that fair exchange be assured. For the second issue, we investigate the problem of constructing fair-exchange protocols. Our approach uses a novel signature paradigm, the gradational signature scheme, to construct protocols that are efficient and scalable. Unlike previous approaches, our scheme does not employ any costly zero-knowledge proof systems in the exchange protocol. Zero-knowledge proofs are needed only in the protocol setup phase; this is a one-time cost. The resulting exchange protocol is more efficient than previous solutions in terms of computation and communication overhead.
We describe a novel method for authenticating multicast packets that is robust against packet loss. Our main focus is to minimize the size of the communication overhead required to authenticate the packets. Our approach is to encode the hash values and the signatures with Rabin’s Information Dispersal Algorithm (IDA) to construct an authentication scheme that amortizes a single signature operation over multiple packets. This strategy is especially efficient in terms of space overhead, because just the essential elements needed for authentication (i.e., one hash per packet and one signature per group of packets) are used in conjunction with an erasure code that is space optimal. To evaluate the performance of our scheme, we compare our technique with four other previously proposed schemes using analytical and empirical results. Two different bursty loss models are considered in the analyses.
In this paper, we address the problem of providing guaranteed quality of service (QoS) channels over multifrequency time division multiple access (MF-TDMA) systems that employ DPSK with multiple modulation modes. The two QoS measures that we consider are the bit error rate (BER) and the data rate. We treat the data rate as a deterministic QoS measure, and the BER as a statistical QoS measure. Our approach is divided into two phases: resource calculation and resource allocation. In the resource calculation phase, we calculate the number of timeslots required to provide the desired level of QoS. We treat this as a disturbance prediction problem and present a Markov-model-based scheme for solving it. We compare the performance of this scheme with that of the scheme implemented in the Extremely High Frequency Satellite Communication (EHF-SATCOM) systems, which are jointly used by the four military services. The resource allocation phase addresses the problem of allocating actual timeslots in the MF-TDMA channel structure (MTCS). The MTCS allows flexibility in capacity allocation, but suffers from inefficiencies caused by fragmentation. Here we propose a novel packing scheme called the Reserve Channel with Priority (RCP) fit, and show that it outperforms the first-fit and the best-fit algorithms in the cases considered.
Using simulations, the authors evaluate the performance of the vestigial sideband QPSK modulated wideband CDMA (VSB/QPSK/W-CDMA) system. A VSB system is used to increase the spectrum efficiency by 30% compared to the conventional double sideband (DSB) system. The VSB/QPSK/W-CDMA scheme showed a comparable performance to the DSB/QPSK/W-CDMA scheme in the frequency flat fading channel. However, the VSB system showed an inferior performance in the presence of multipath fading and multiple access interference. It was shown that the use of additional Rake branches is effective in compensating for the VSB system's performance degradation.