Impact Analysis of Standardized GNSS Receiver Testing against Real-World Interferences Detected at Live Monitoring Sites

Article

Mohammad Zahidul H. Bhuiyan 1,*, Nunzia Giorgia Ferrara 1, Amin Hashemi 1, Sarang Thombre 1, Michael Pattinson 2 and Mark Dumville 2

1 Finnish Geospatial Research Institute, National Land Survey of Finland, Kirkkonummi, Finland (N.G.F.; A.H.; S.T.)
2 Nottingham Scientific Limited, Nottingham NG7 2TU, UK (M.P.; M.D.)
* Correspondence

Received: 24 January 2019; Accepted: 5 March 2019; Published: 13 March 2019
Sensors 2019, 19, 1276

Abstract: GNSS-based applications are susceptible to different threats, including radio frequency interference. Ensuring that new applications can be validated against the latest threats supports wider adoption and success of GNSS in higher value markets. Therefore, the availability of standardized GNSS receiver testing procedures is central to developing next generation receiver technologies. The EU Horizon2020 research project STRIKE3 (Standardization of GNSS Threat reporting and Receiver testing through International Knowledge Exchange, Experimentation and Exploitation) proposed standardized test procedures to validate different categories of receivers against real-world interferences detected at different monitoring sites. This paper describes the recorded interference signatures and their use in the standardized test procedures, and analyzes the results for two categories of receivers, namely mass-market and professional grade. The result analysis, in terms of well-defined receiver key performance indicators, showed that the performance of both receiver categories was degraded by the selected interference threats, although there was considerable difference in the degree and nature of their impact.

Keywords: Global Navigation Satellite Systems (GNSS); radio frequency interference; receiver test standard

1. Introduction

Global Navigation Satellite System (GNSS) technology plays an important role in an ever-expanding range of safety, security, business, and policy critical applications.
Moreover, many critical infrastructures, such as telecommunication networks, energy grids, stock markets, etc., rely, at least to some degree, on uninterrupted access to GNSS positioning, navigation, and timing services [1]. At the same time, however, different threats have emerged which have the potential to degrade, disrupt, or completely deny these GNSS services without prior warning [2]. This paper focuses on the threat posed by Radio Frequency Interference (RFI) to the GNSS L1/E1 spectrum. RFI can be emitted either unintentionally, e.g., by commercial high-power transmitters, ultra-wideband radar, television, VHF, mobile satellite services, and personal electronic devices, or intentionally, e.g., by jammers and more sophisticated signal spoofing devices [2,3]. Due to the proliferation of GNSS-based applications to almost every part of the globe, the RFI threat to GNSS is no longer seen as a local or regional problem, but rather requires an international perspective. To ensure GNSS is protected, there is now a need to also respond at an international level by ensuring that there is the following: (i) a common standard

for real-world GNSS threat monitoring and reporting, and (ii) a global standard for assessing the performance of GNSS receivers and applications under threat. GNSS threat reporting standards would allow for the compilation of real-world threats into a database that could be analyzed to develop GNSS receiver testing standards that ensure new applications are validated against the latest threats [4-7]. Both these standards are currently missing across all civil application domains and, if implemented, could potentially open the field for wider adoption and success of GNSS in higher value markets. The EU Horizon2020 research project STRIKE3 (Standardization of GNSS Threat reporting and Receiver testing through International Knowledge Exchange, Experimentation and Exploitation) is a European initiative that addresses the need to monitor, detect, and characterize GNSS threats and to propose standardized testing procedures to validate different categories of receivers against these threats [4-7]. Importantly, STRIKE3 has deployed an international network of GNSS interference monitoring sites that monitor interference on a global scale and capture real-world threats for further analysis [8]. Utilizing the thousands of threats collected from this network over a three-year period, STRIKE3 has developed a baseline set of typical real-world interference/jamming threats that can be used to assess the performance of different categories of GNSS receivers. This baseline set, which is used in this paper to assess the performance of two categories of receivers, consists of five different threat signatures as follows: wide swept frequency with fast repeat rate, multiple narrow band signals, triangular swept frequency, triangular wave swept frequency, and tick swept frequency. Additional details about the behavior of these five threat signatures in the time and frequency domains, and the criteria for their selection, can be found in [9].
The STRIKE3 project also proposed standardized testing procedures for GNSS receivers, which make use of these threat signatures for receiver validation. Collectively, the above activities aim to improve the mitigation capability and resilience of future GNSS receivers against real-world interference threats. An example of such an attempt has already been made in [3], where an adaptive interference mitigation technique was proposed to counteract an unintentional Narrow-Band Interference (NBI) detected at one of the STRIKE3 monitoring sites. In [10], the European Telecommunications Standards Institute's (ETSI) Satellite Communication and Navigation (SCN) group of the Technical Committee Satellite Earth Stations and Systems (TC SES) suggested elaborating a firm and common standard for GNSS-based location systems. The group addresses interference from the perspectives of robustness to interference, interference localization, and GNSS denial survival, in addition to formulating minimum performance requirements for location systems. In particular, [11] discusses GNSS-based applications and standardization needs more inclusively. In addition, the Resilient Navigation and Timing Foundation urges the development of standards for interference-resistant receivers, to include ARAIM (Advanced Receiver Autonomous Integrity Monitoring) and RAIM, in order to protect, toughen, and augment GNSS [12]. The international aviation community has also been considering updating standards regarding GNSS aviation receivers, improving their ability to withstand interference from repeaters, pseudolites, and jammers [13]. In order to address the needs of various GNSS stakeholders, the STRIKE3 consortium proposed a draft standard for GNSS receiver testing against real-world interferences detected at different STRIKE3 monitoring sites, along with a test campaign following the guidelines from the drafted receiver test standard. This paper discusses how the receiver testing is done and presents some results obtained within the STRIKE3 testing activity.
In particular, the paper addresses mass-market and professional grade receiver testing. The test platform and test methodology are described in Sections 2 and 3, respectively. Section 4 provides an overview of the receiver configuration and simulated GNSS scenario settings, whereas Section 5 presents the five threats chosen for the tests. The results of the tests carried out are presented and analyzed in Section 6 and, finally, conclusions are drawn in Section 7.

2. Test Platform

This section describes the physical platform used to test the mass-market and professional grade GNSS receivers. For both categories, the test approach is the following:

1. The first iteration of testing is performed using a clean GNSS signal without the presence of any interference threats. This helps establish the baseline performance of the Receiver Under Test (RUT).
2. Each subsequent test case addresses one threat scenario (one type of interference signal). Thus, the performance of the receiver is recorded sequentially against the different interference signals.
3. Analysis of the recorded results is carried out to show the receiver behavior under the conditions of the different interference signals in terms of pre-defined performance metrics. This performance is compared to the baseline under nominal signal conditions.

The test set-up, which is shown in Figure 1, includes the following equipment:

1. Spectracom GSG-6 RF GNSS constellation simulator [14],
2. Keysight Vector Signal Generator N5172B [15],
3. RUT, and
4. Laptop for data storage and analysis.

In addition, generic components such as coaxial cables and signal combiners are used. The clean GNSS signal is generated from a multi-constellation, multi-frequency Spectracom GSG-6 hardware simulator, whereas the threat signature is generated using Keysight's Vector Signal Generator (VSG) [15] through the replay of raw I/Q (In-phase/Quadrature-phase) sample data. The raw I/Q data are captured in the field from the STRIKE3 monitoring sites and are used as input to the VSG, which then re-creates the detected threat by continuously replaying the data in a loop. Both the GNSS signal simulator and the VSG are controlled via software in order to automate the testing process. The automation script is used to control these devices remotely and to limit human intervention. The script also provides synchronization between the two instruments in order to ensure the repeatability of the tests and the reliability of the results. A laptop is used to record and analyze the performance of the receiver against the different threat signals.
The analysis is performed using a MATLAB-based script that processes the NMEA output messages from the RUT.

3. Receiver Test Methodology

Three different test methodologies are performed for each receiver category. A brief overview of each test methodology is presented as follows.

3.1. Baseline Test Method

A clean GNSS signal, in the absence of interference, is fed to the RUT to validate its performance under nominal conditions. The total duration of this test is 60 min.
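The NMEA processing step can be illustrated with a minimal sketch (in Python here, rather than the project's actual MATLAB tooling; the GGA log lines are illustrative and their checksums are not validated):

```python
# Illustrative parser for NMEA GGA sentences of the kind logged from the
# RUT: extracts per-epoch fix quality, satellites used, HDOP, and altitude,
# which feed the performance metrics described in this section.

def parse_gga(sentence: str) -> dict:
    """Parse one $..GGA sentence; field layout per NMEA 0183."""
    body, _, _checksum = sentence.strip().lstrip("$").partition("*")
    f = body.split(",")
    if not f[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")
    return {
        "utc": f[1],
        "fix_quality": int(f[6] or 0),   # 0 = no fix
        "sats_used": int(f[7] or 0),
        "hdop": float(f[8] or "nan"),
        "alt_m": float(f[9] or "nan"),
    }

# Two toy epochs: one valid fix, one epoch with the fix lost.
log = [
    "$GPGGA,120001.00,6000.000,N,02400.000,E,1,09,1.1,30.0,M,18.0,M,,*5C",
    "$GPGGA,120002.00,6000.000,N,02400.000,E,0,03,9.9,,M,,M,,*7A",
]
epochs = [parse_gga(s) for s in log]
availability = sum(e["fix_quality"] > 0 for e in epochs) / len(epochs)
print(availability)  # 0.5 for this two-epoch toy log
```

In a full analysis the same per-epoch records would be accumulated over the whole test and intersected with the interference-on interval.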

Figure 1. Receiver test platform.

3.2. Time to Recompute Position (TTRP) Solution Test Method

This test is used to measure the time taken for the RUT to recover immediately after a strong interference event. The interference is switched on 14 min after the simulated scenario starts and is applied for 90 s. The interference power is fixed to a value such that the receiver loses the position solution. In the tests, the interference power corresponds to a Jamming-to-Signal (J/S) ratio of 90 dB. The time taken between switching off the interference source and the first fix is recorded as the TTRP after interference. The profile of this test methodology, whose total duration is 30 min, is illustrated in Figure 2.

Figure 2. TTRP test profile.

3.3. Sensitivity Test Method

This test is conducted by varying the power of the interfering signal. The interference is turned on 10 min after the simulation starts and it follows a two-peak ramp power profile. The initial interference power is such that the J/S ratio is 5 dB, and the interference power is then increased by 5 dB every 45 s until reaching the first power peak, which corresponds to a J/S of 65 dB. After it reaches the first peak, the interference power is decreased; the power step and the dwell duration per power step are again, respectively, 5 dB and 45 s. The entire process is repeated a second time. The profile of this test methodology, whose total duration is 60 min, is illustrated in Figure 3.
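The two-peak ramp can be expressed as a simple dwell sequence. A sketch follows, using the step and dwell values from the text; the assumption that each ramp returns all the way to 5 dB before the cycle repeats is illustrative:

```python
# Sketch of the two-peak sensitivity ramp: J/S starts at 5 dB, is raised
# in 5 dB steps held for 45 s each up to the 65 dB peak, lowered back the
# same way, and the whole up/down cycle is run twice.

STEP_DB, DWELL_S, JS_MIN_DB, JS_MAX_DB = 5, 45, 5, 65

rise = list(range(JS_MIN_DB, JS_MAX_DB + STEP_DB, STEP_DB))  # 5, 10, ..., 65
fall = rise[-2::-1]                                          # 60, 55, ..., 5
profile = (rise + fall) * 2          # two-peak ramp, one value per dwell
interference_duration_s = len(profile) * DWELL_S

print(max(profile), len(profile), interference_duration_s)  # 65 50 2250
```

Under these assumptions the ramped interference occupies 2250 s (37.5 min), which, together with the 10 min interference-free lead-in, fits within the 60 min total test duration.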

Figure 3. Sensitivity test profile.

In order to assess the performance of the RUT in the presence of interference, different metrics are observed. The following outputs from the GNSS receiver are recorded and analyzed for all test methodologies:

1. Number of tracked satellites;
2. Position fix indicator;
3. Number of satellites used in the fix;
4. Carrier-to-Noise density (C/N0) ratio;
5. East-North-Up deviations of the receiver's solution from the true position.

Moreover, depending on the test methodology, additional parameters are evaluated. In the case of the TTRP test method, the time taken for the RUT to re-obtain a position fix after a strong interference event is measured. Other metrics are of interest in the case of the sensitivity test method. In particular, the Jamming-to-Signal ratio at which the position solution is no longer available and the availability of the position solution during the interference event are computed in the sensitivity tests. Furthermore, the maximum horizontal and vertical position errors are computed for the interval in which the interference is present and the receiver offers a valid position fix.

4. Receiver Configuration and Simulated Scenario Settings

For the sake of simplicity, the GNSS receiver configuration (as defined in terms of the constellation and frequency bands to be used in the validation) is restricted to GPS L1 single frequency (GPS SF) and Galileo E1 single frequency (GALILEO SF). The RUT was first configured in factory settings mode and then all the necessary modifications, based on the requirements of each individual test case scenario, were applied. It must be noted, however, that the test procedure and set-up are also equally applicable to other receiver configurations.

Since the RUT compensates for the atmospheric effects, atmospheric modelling capability is required in the testing activity. The GNSS simulator supports a set of models to simulate such effects and, in the tests, the Klobuchar and Saastamoinen models were used for the ionosphere and the troposphere, respectively. The receiver was configured in static stand-alone mode. Table 1 provides an overview of the simulated scenario settings, including the start time, duration, GNSS signal power, and interference power levels for the different test methodologies.

Table 1. Simulated scenario settings.

Constellation: GPS + Galileo
Number of satellites: 10 (GPS) + 10 (Galileo)
Centre frequency (MHz): 1575.42
Ionosphere model: Klobuchar (default parameters)
Troposphere model: Saastamoinen (default parameters)
Start time: :00:00
Duration (min): 30 (TTRP) / 60 (sensitivity test and baseline)
GNSS signal power (dBm): -125
Interference power level for TTRP test method (dBm): -35
Interference power range for Sensitivity test method (dBm): [-120 : 5 : -60]
J/S range for Sensitivity test method (dB): [5 : 5 : 65]
Receiver location (Lat/Long/Alt): 60 N / 24 E / 30 m

When performing the tests, an elevation mask and a C/N0 mask are applied for the receiver's PVT computation. Satellites with elevations lower than 5 degrees, as well as satellites whose C/N0 is lower than 25 dB-Hz, are excluded from the position computation, as reported in Table 2. At the beginning of every test, the RUT is in cold start mode.

Table 2. Receiver configuration.

C/N0 threshold for satellite use in navigation computation (dB-Hz): 25
Minimum elevation angle (degrees): 5
Start mode: Cold start
Dynamic model: Stationary

5. Real-World Interference Classification

The purpose of the proposed test standards is not to propose a fixed set of threats to cover all types of signal, but instead to develop draft standards for testing receivers against interference that has been observed in the real world through the dedicated interference monitoring network. Using real interference threats that have been detected in the field allows interested parties (e.g., certification bodies, application developers, receiver manufacturers, etc.) to better assess the risk to GNSS performance during operations and to develop appropriate counter-measures. However, with many thousands of potential threats being detected by the monitoring network, it is impractical to test receivers against all detected threats. Therefore, an initial selection process has been carried out and a baseline set of 5 types of threats has been selected for inclusion in the draft test standard for receiver testing [9].
In particular, each receiver is tested against the following 5 types of interference threats, described in Table 3: wide swept frequency with fast repeat rate, multiple narrow band signals, triangular swept frequency, triangular wave swept frequency, and tick swept frequency. Each of these threats is generated by the VSG replaying raw I/Q data captured in the field during a real event, if not mentioned otherwise. In Table 3, each type of threat is explained via an example plot with two figures. The left figure denotes the estimated power spectral density at the GNSS L1/E1 carrier frequency +/-8 MHz, and the right figure represents the spectrogram of the perceived interference. As can be seen in the example plots, the spectrogram color map is defined so that blue represents the least signal power, whereas red represents the most signal power.
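The left-hand plots can be produced from raw I/Q captures with a straightforward FFT-based power spectral density estimate. A minimal sketch follows; the 16 MHz complex sampling rate (covering the +/-8 MHz span) and the synthetic CW interferer are illustrative stand-ins for a real STRIKE3 capture:

```python
# Sketch: FFT-based PSD around the L1/E1 carrier from complex I/Q samples,
# with a synthetic continuous-wave interferer at a +1 MHz offset.
import numpy as np

fs = 16e6                          # complex rate covering +/-8 MHz
n = 4096
t = np.arange(n) / fs
rng = np.random.default_rng(0)
iq = np.exp(2j * np.pi * 1e6 * t)  # CW interferer at +1 MHz offset
iq = iq + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

win = np.hanning(n)                            # reduce spectral leakage
spec = np.fft.fftshift(np.fft.fft(iq * win))   # center DC in the plot
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1 / fs))
psd_db = 20 * np.log10(np.abs(spec) + 1e-12)   # relative dB scale
peak_hz = freqs[np.argmax(psd_db)]
print(round(peak_hz))  # 1000000
```

The spectrogram on the right would be obtained by repeating the same windowed FFT over short overlapping segments and stacking the columns over time.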

Table 3. Description of the baseline set of selected threats. (In the original article, each threat is illustrated by an example plot showing the power spectral density and the spectrogram of the signal.)

Id 01. Wide sweep, fast repeat rate. Reason for choice: very common (in terms of total events and sites).
Id 02. Multiple narrowband signals. Reason for choice: very common (in terms of total events and sites).
Id 03. Triangular swept frequency. Reason for choice: common.
Id 04. Triangular wave swept frequency. Reason for choice: common.
Id 05. Tick swept frequency. Reason for choice: quite common; an evolving threat (new type).

6. Test Results

This section presents the results of the standardized tests for two categories of receivers, the Mass-Market (MM) and the PROfessional (PRO) RUT. An initial version of the test results was published in [16], where the results were analyzed only for one interference signature. The published article [16] only offered a general overview of the impact on the receivers when exposed to real-world interference, and it lacked a thorough analysis of the receivers' performance under different circumstances (e.g., the impact of C/N0 thresholding, the impact of how the interference signal is generated, the impact of the various interference signatures, the impact on a dual frequency receiver, etc.). Therefore, an extensive result analysis of the receiver testing is presented as follows.

For each RUT category, a summary table provides a comparison between the impacts of the 5 selected types of interference. In particular, the results are shown in terms of the following metrics:

1. Maximum horizontal position error during the interference interval;
2. Maximum vertical position error during the interference interval;
3. Position fix availability during the interference interval;
4. Jamming-to-Signal ratio at which the position fix is lost for at least 5 s (J/S_PVT_lost);
5. Jamming-to-Signal ratio at which the position fix is re-obtained for at least 5 s (J/S_PVT_reobtained);
6. TTRP.

The receiver's performance under the conditions of the different interference signals is compared to the baseline under nominal signal conditions. For the baseline case, the statistics are computed by considering the interval corresponding to the one when, in the interference test case, the interference would be on. This ensures that the difference in the test statistics of receiver performance, between the baseline test case and a selected threat test case, is solely due to the impact of the threat signature on the GNSS signal.

6.1. Mass Market Receiver

The performance of the MM RUT is summarized in Table 4. Whenever an interfering signal is present, as its power increases, the receiver performance degrades until, at some point, the position fix is lost. The receiver is then capable of re-computing a position solution only when the interference power decreases. An example of such interference impact on the mass-market RUT performance is given in Figures 4 and 5, which show, respectively, the average C/N0 of the tracked satellites and the East-North-Up deviations of the position solution for the test case MM02-STATIC-SENSITIVITY (threat signature 02: multiple narrowband interfering signals). In both figures, the corresponding two-peak ramp interference power profile can also be seen (on the right-hand Y-axis) for the sensitivity test methodology, as described in Section 3.3.
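Metrics 4 and 5 can be extracted by scanning synchronized per-epoch records of the applied J/S ratio and the receiver fix flag. A sketch follows, assuming 1 Hz epochs; the toy trace is illustrative, not measured data:

```python
# Sketch of J/S_PVT_lost and J/S_PVT_reobtained: find the J/S at which the
# fix is first lost for at least 5 consecutive seconds, then the J/S at
# which it is later re-obtained for at least 5 consecutive seconds.

def first_run(seq, value, hold, start=0):
    """Index where `seq` first equals `value` for `hold` consecutive epochs."""
    run = 0
    for i in range(start, len(seq)):
        run = run + 1 if seq[i] == value else 0
        if run >= hold:
            return i - hold + 1
    return None

# Toy 1 Hz trace: J/S ramps up, the fix drops, then returns on the way down.
js  = [5, 10, 15, 20, 25, 30, 35, 40, 40, 40, 40, 40, 35, 30, 25, 20, 15, 15, 15, 15, 15]
fix = [1,  1,  1,  1,  1,  1,  0,  0,  0,  0,  0,  0,  0,  0,  0,  1,  1,  1,  1,  1,  1]

i_lost = first_run(fix, 0, hold=5)
i_reob = first_run(fix, 1, hold=5, start=i_lost)  # search only after the loss
js_pvt_lost, js_pvt_reobtained = js[i_lost], js[i_reob]
print(js_pvt_lost, js_pvt_reobtained)  # 35 20
```

Restricting the re-obtain search to epochs after the loss avoids picking up the fix that was valid before the interference ramp.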
The inaccurate position solution, especially in the vertical component, computed by the RUT at the beginning of the test is due to the cold start, the resulting unavailability of ionospheric parameters, and the convergence of the navigation filter. As can be seen from Table 4, the Jamming-to-Signal ratio at which the position fix is lost for at least 5 s (J/S_PVT_lost) and the Jamming-to-Signal ratio at which the position fix is re-obtained for at least 5 s (J/S_PVT_reobtained) lie within comparably narrow ranges across the five threats. This translates into a position fix availability in the presence of the five interference threats of between ~60% and ~70%. The triangular swept frequency interference signature (i.e., test case MM03) appears to be the most impactful, both in terms of availability and maximum position error.

Table 4. Test results for the MM RUT (maximum horizontal position error (m), maximum vertical position error (m), position fix availability (%), J/S_PVT_lost (dB), J/S_PVT_reobtained (dB), and TTRP* (s) for the baseline and test cases MM01 to MM05).

* TTRP values are obtained with the TTRP test cases; the other metrics (e.g., maximum horizontal position error, maximum vertical position error, etc.) are obtained either in the baseline test case or in the different sensitivity test cases, as represented by the test id.
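The TTRP metric itself is a simple interval measurement between the interference switch-off epoch and the first subsequent valid fix. A sketch, with illustrative timestamps in seconds from scenario start:

```python
# Sketch of the TTRP measurement: time between switching off the
# interference source and the first subsequent valid position fix.

def ttrp(fix_log, t_interference_off):
    """fix_log: iterable of (time_s, has_fix) pairs in time order."""
    for t, has_fix in fix_log:
        if t >= t_interference_off and has_fix:
            return t - t_interference_off
    return None  # no recovery within the log

# Interference off at t = 930 s; toy receiver regains the fix at t = 934 s.
log = [(t, t >= 934) for t in range(925, 945)]
print(ttrp(log, t_interference_off=930))  # 4
```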

Figure 4. Average C/N0 of the tracked satellites for the MM RUT in the presence of threat signature 02.

Figure 5. East-North-Up deviations for the MM RUT in the presence of threat signature 02.

The TTRP tests showed that the mass-market RUT is capable of recovering from a strong interference event almost immediately. The TTRP value is, in fact, 1 s for four of the tested interference signatures, and 4 s in the case of the wide swept frequency signal with a fast repeat rate (i.e., test case MM01).

6.2. Professional Grade Receiver

Similar to the MM receiver, the consequences of the interference presence on the PRO grade RUT are a degradation in signal quality and hence position accuracy, which worsen as the interference level increases, until the RUT loses its position fix. An example of such interference impact on the PRO grade RUT is given in Figures 6 and 7, which show the average C/N0 of the tracked satellites and the East-North-Up deviations of the position solution for the test case PRO02-STATIC-SENSITIVITY (threat signature 02: multiple narrowband interfering signals), respectively.

Figure 6. Average C/N0 of the tracked satellites for the PRO RUT in the presence of threat signature 02.

Figure 7. East-North-Up deviations for the PRO RUT in the presence of threat signature 02.

Additionally, in this case, due to the cold start, the resulting unavailability of ionospheric parameters, and the convergence of the navigation filter, the professional grade RUT offers an inaccurate position solution in the beginning, especially in the vertical component.

Table 5 summarizes the results for the PRO grade RUT. It can be seen that both J/S_PVT_lost and J/S_PVT_reobtained lie within comparably narrow ranges, the latter between 35 and 45 dB. Consequently, the position fix availability for the professional grade RUT in the presence of the five interference threats is between ~61% and ~66%. The TTRP tests showed that the professional grade RUT takes from 6 to 10 s to recover from a strong interference event.

Table 5. Test results for the PRO RUT (maximum horizontal position error (m), maximum vertical position error (m), position fix availability (%), J/S_PVT_lost (dB), J/S_PVT_reobtained (dB), and TTRP* (s) for the baseline and test cases PRO01 to PRO05).

* TTRP values are obtained with the TTRP test cases; the other metrics (e.g., maximum horizontal position error, maximum vertical position error, etc.) are obtained either in the baseline test case or in the different sensitivity test cases, as represented by the test id.

6.3. Comparison Between Mass Market and Professional Grade RUT with Default C/N0 Masking Settings

From the comparison between Tables 4 and 5, it can be observed that the availability of the position fix during the interference interval does not differ much between the mass market and professional grade receivers. This is partly due to the fact that the C/N0 mask was set to 25 dB-Hz for both receivers' PVT computation. In order to properly compare the behavior of the two categories of RUT, additional tests with the default C/N0 mask settings were carried out. In particular, the sensitivity tests were performed for the mass market and professional receivers in the presence of threat signature 03, the triangular chirp swept frequency interference signal (MM03 and PRO03). Figure 8 shows the ENU (East-North-Up) deviations of the position solution for the MM and PRO receivers. As can be seen from the figure, the behaviour of the two receiver categories differs significantly when the default C/N0 settings are used. In particular, the MM RUT prioritizes the availability of the position solution over its accuracy. During the interference interval, there are only a few epochs at which the receiver does not yield a position solution, but this high yield comes with degraded positioning accuracy. On the other hand, the PRO RUT prioritizes accuracy over availability. It does not offer a position solution as often during the interference interval, but when it does the position errors are small.
The difference between the behavior of the mass market and professional grade receivers is also visible in Figure 9, which shows the drop in the average C/N0 of the satellites used in the position fix, with respect to the baseline, for the entire duration of the test. While the MM RUT continues to use very low quality signals in order to provide a position solution, even if inaccurate, for as long as possible, the PRO RUT stops computing the solution when the signal quality decreases by about 20 dB. A summary of the results of the tests with default C/N0 settings is given in Table 6. The maximum horizontal and vertical errors are computed for the interval in which the interference is present and the receiver also offers a position fix. As already discussed, the position fix availability during the interference interval for the mass-market receiver is high (97.91%) at the expense of position accuracy. On the other hand, the professional grade RUT preserves position accuracy at the expense of solution availability (58.58%). The maximum horizontal and vertical errors in the test case are only slightly larger than in the baseline case. It is important to recall here that the availability is computed only for the duration when the interference is active. It can be observed from Table 6 that, when the manufacturer's default receiver settings are used, the mass market RUT has a much higher sensitivity as compared to the professional grade RUT. The former is able to withstand the interference event through the entire rising ramp of the interference power profile and for most of the falling ramp duration. The position fix is lost after the interference power peak has been reached (precisely, at J/S = 30 dB) and it is regained after a few seconds. The behavior is different for the PRO RUT, which instead stops offering a position solution as soon as the interference reaches a power level such that J/S is 45 dB.
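The C/N0 drop plotted in Figure 9 compares, epoch by epoch, the average C/N0 of the commonly tracked satellites in the baseline and interference runs. A minimal sketch, assuming hypothetical per-epoch {PRN: C/N0} dictionaries from the two runs:

```python
def cn0_drop(baseline_cn0, test_cn0):
    """Per-epoch drop (dB) in average C/N0 of satellites used in the fix,
    relative to the baseline run. Inputs are lists of {prn: cn0_db_hz}
    dicts, one dict per epoch, aligned between the two runs."""
    drops = []
    for base, test in zip(baseline_cn0, test_cn0):
        used = set(base) & set(test)  # satellites present in both runs
        if not used:
            drops.append(None)  # no common satellites at this epoch
            continue
        b = sum(base[p] for p in used) / len(used)
        t = sum(test[p] for p in used) / len(used)
        drops.append(b - t)
    return drops

base = [{1: 45.0, 2: 43.0}]
test = [{1: 30.0, 2: 28.0}]
print(cn0_drop(base, test))  # [15.0]
```

On such a trace, the PRO RUT's solution outages would appear once the drop approaches roughly 20 dB, while the MM RUT keeps producing fixes well beyond that point.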

Figure 8. East-North-Up deviations for (a) the MM RUT and (b) the PRO RUT in the presence of threat signature 03 when the C/N0 default settings are used in both receivers.

Figure 9. Drop in average C/N0 for (a) the MM RUT and (b) the PRO RUT in the presence of threat signature 03 when the C/N0 default settings are used in the receivers.

Table 6. Test results for the MM and PRO RUT when the default C/N0 settings are used. For each receiver, the table reports the maximum horizontal position error (m), the maximum vertical position error (m), and the position fix availability, each for the test case and for the baseline, together with J/S_PVT_lost (dB). For the mass market RUT, the position fix availability is 97.91% in the test case (100% in the baseline) and J/S_PVT_lost is 30 dB (on the falling ramp); for the professional grade RUT, the availability is 58.58% (100% in the baseline) and J/S_PVT_lost is 45 dB (on the rising ramp).
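The J/S_PVT_lost and J/S_PVT_reobtained values in Table 6 correspond to the interference-to-signal ratio at the epochs where the fix flag transitions. A sketch under the assumption of aligned per-epoch J/S and fix logs (hypothetical format):

```python
def js_at_fix_transitions(js_profile, fix_flags):
    """J/S (dB) at the epoch where the PVT solution is first lost and at
    the epoch where it is subsequently re-obtained."""
    js_lost = js_reobtained = None
    for i in range(1, len(fix_flags)):
        if fix_flags[i - 1] and not fix_flags[i] and js_lost is None:
            js_lost = js_profile[i]          # fix just dropped
        elif not fix_flags[i - 1] and fix_flags[i] and js_lost is not None:
            js_reobtained = js_profile[i]    # fix just came back
            break
    return js_lost, js_reobtained

# rising then falling J/S ramp: fix lost at 40 dB, regained at 20 dB
js = [0, 10, 20, 30, 40, 45, 40, 30, 20, 10, 0]
fix = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1]
print(js_at_fix_transitions(js, fix))  # (40, 20)
```

Whether the loss happens on the rising or falling ramp can be read off by comparing the loss epoch against the epoch of peak J/S.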

Synthetic vs. Real-World Signature Impact Analysis

The results presented in the previous sections were obtained by generating the interference through the replay of raw I/Q sample data captured in the field by the STRIKE3 monitoring sites. In addition, a set of tests was also conducted using a second approach for threat generation: creating synthetic I/Q data with properties that are representative of the real signals and replaying them with the VSG. The advantage is that the generated signal is free of multipath and GNSS PRN codes, hence it is cleaner than the one recorded in the field. However, this approach has limitations when the original signal is complex and difficult to re-create synthetically, such as in the case of the chirp tick signal, as the resultant interference signal may not accurately reflect the real one and, therefore, the impact on the receiver performance may be different. Sensitivity testing with synthetic interference signals was performed only for the MM RUT. In particular, the following four, out of the five threat signatures described in Table 3, were possible to re-create via the VSG: wide swept frequency with fast repeat rate, multiple narrow band signals, triangular, and triangular wave swept frequency. The results are summarized in Table 7, which also includes the results obtained with live interference data recorded in the field. For each performance metric, the column with label S contains the results for the synthetic signature, whereas the column with label R contains the results for the recorded signature.

Table 7. Test results for the MM RUT when synthetic (S) and recorded (R) interference signals are used. For test cases MM01–MM04, the table reports the maximum horizontal position error (m), the maximum vertical position error (m), the position fix availability, J/S_PVT_lost (dB), and J/S_PVT_reobtained (dB), each for the synthetic and the recorded signatures; the position fix availability with the recorded signatures is 65.07%, 67.96%, 60.44%, and 61.47% for the four test cases, respectively.

As can be observed from Table 7, the synthetic signatures have a stronger impact on the RUT. This is shown by the reduced availability of the position fix, due to the longer time needed by the receiver to recover from the interfering event.
J/S_PVT_reobtained takes values in a higher dB range when the recorded interference is used, while it is in the range of 0–25 dB in the case of the synthetic threat signatures. In one test case (i.e., the triangular wave swept frequency signal, MM04), the RUT is capable of reobtaining the PVT solution only after the interference has been turned off. As an example, the difference in the impact to the receiver between the recorded and synthetic signature can be seen in Figure 10, which shows the East-North-Up deviations of the receiver's solution from the true position in test case MM04, using the recorded interference and the synthetic interference. It is observed that the number of epochs in which the RUT is not capable of providing a position fix is significantly higher when the synthetic interference is used. This could be due to the fact that the synthetic signatures are free of multipath and GNSS PRN codes, and hence are much cleaner than the recorded ones. This is, however, an empirical conclusion from the performed tests, which would require further investigation. It will always remain a challenge to reproduce a real-world detected interference event, since the reproduction of the signal will always be based on digitized raw In-phase Quadrature-phase (I/Q) samples captured in the presence of a real GNSS signal. On the other hand, the reproduction of a synthetically generated signal does not have any GNSS signature in it, as the samples are taken from a real jamming device and then reproduced via the VSG.
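For illustration, a triangular swept-frequency (chirp) jammer of the kind re-created for the VSG can be synthesized at baseband as constant-envelope I/Q samples driven by a triangular instantaneous-frequency law. This is a generic sketch; the sample rate, frequency deviation, and sweep period below are illustrative, not the actual STRIKE3 signature parameters:

```python
import numpy as np

def triangular_chirp_iq(fs, duration_s, f_dev_hz, sweep_period_s):
    """Baseband I/Q of a triangular (up/down) swept-frequency jammer.
    fs: sample rate (Hz), f_dev_hz: peak frequency deviation (Hz),
    sweep_period_s: period of one full up/down sweep (s)."""
    t = np.arange(int(fs * duration_s)) / fs
    # triangular wave in [-1, 1] controlling the instantaneous frequency
    tri = 2.0 * np.abs(2.0 * (t / sweep_period_s % 1.0) - 1.0) - 1.0
    inst_freq = f_dev_hz * tri
    # integrate frequency to phase; constant-envelope complex exponential
    phase = 2.0 * np.pi * np.cumsum(inst_freq) / fs
    return np.exp(1j * phase)

iq = triangular_chirp_iq(fs=4e6, duration_s=0.01, f_dev_hz=1e6, sweep_period_s=0.001)
print(iq.shape)  # (40000,)
```

Unlike a field recording, such a waveform contains no GNSS PRN codes or multipath, which matches the paper's observation that the synthetic signatures are "cleaner" than their recorded counterparts.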

Figure 10. East-North-Up deviations for test case MM04-STATIC-SENSITIVITY using (a) recorded interference and (b) synthetic interference.

Dual Frequency Receiver Testing

Dual-frequency tests, where the RUT is configured to receive GPS/Galileo signals in both the L1/E1 and L5/E5 frequency bands, were also performed. In particular, a dual-frequency professional grade receiver was assessed in the presence of the wide swept frequency with fast repeat rate interfering signal (i.e., PRO01). The interference was generated only at the L1/E1 carrier frequency. The objective of this test was to investigate whether the RUT could intelligently exploit frequency diversity in the presence of interference in one of the frequency bands. Since no information on the L5/E5 signals could be retrieved from the NMEA files, the analysis was conducted using a MATLAB-based script that processes the RINEX files from the RUT. For each of the five signals (GPS L1, GPS L5, Galileo E1, Galileo E5b, and Galileo E5a), the tracked satellites and the signal strength of the tracked satellites were analyzed. The impact of the interference on the L1/E1 band is clearly visible in Figures 11a and 12a, which show the C/N0 values for the GPS L1 C/A and Galileo E1b signals, respectively. As expected, as the interference power increases, the C/N0 ratio of the signals in the L1/E1 band decreases until, at some point, the RUT stops tracking them and no longer offers the corresponding observations.

It is interesting to find out that the signals in the L5/E5 band are also affected, even though the frequencies are far apart from the interfering band (i.e., Figures 11b and 12b). However, it can also be observed that the receiver does not generate L5/E5 observations for those epochs in which no L1/E1 observation is generated. It can be seen from both Figures 11 and 12 that the data gaps in the test case for both frequencies were identical. No observation is generated for the GPS L5 satellites as soon as the L1 signals are so degraded that the RUT cannot track them, even though the L5 signal quality is still good. The same happens with Galileo. This shows that the RUT was not yet able to exploit frequency diversity to withstand the interference.
Figure 11. C/N0 of (a) GPS L1, and (b) GPS L5 tracked satellites.
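The coupling between the L1/E1 and L5/E5 data gaps described above can be checked programmatically once the RINEX observations are grouped per signal. A minimal sketch, assuming hypothetical per-epoch sets of satellites with observations on each band:

```python
def coupled_gaps(obs_l1, obs_l5):
    """True if every epoch carrying L5/E5 observations also carries
    L1/E1 observations, i.e., the L5/E5 output is gated by L1/E1 tracking.
    Inputs are per-epoch sets of satellite IDs with observations."""
    # bool(l5) <= bool(l1): an L5 observation never appears without an L1 one
    return all(bool(l5) <= bool(l1) for l1, l5 in zip(obs_l1, obs_l5))

l1 = [{1, 3}, {1}, set(), set(), {1, 3}]  # L1/E1 gap at epochs 2-3
l5 = [{1}, {1}, set(), set(), {3}]        # identical L5/E5 gap
print(coupled_gaps(l1, l5))  # True
```

A receiver that truly exploited frequency diversity would keep producing L5/E5 observations through the L1/E1 gap, and this check would return False.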

Figure 12. C/N0 of (a) Galileo E1b, and (b) Galileo E5b tracked satellites.

7. Conclusions

The results of the standardized tests for two categories of receivers were presented. The results were analyzed in terms of well-defined receiver key performance indicators. The analysis showed that both receiver categories were impacted ominously by the interference threats. More specifically, it was observed that the mass-market RUT was capable of recovering from a high-powered jamming event much faster than the professional grade RUT. Moreover, the MM RUT was capable of withstanding a power-varying interference event longer than the PRO RUT, resulting in a much higher availability for the MM RUT than that of the PRO RUT. On the contrary, in terms of accuracy, the PRO RUT performs better than the MM RUT. However, when a C/N0 mask of 25 dB-Hz was applied at the PVT computation stage for both receiver categories, they tended to behave almost similarly against the interference threats, i.e., both receivers exhibited similar availability against the same kind of interference threat. A performance comparison of the RUT under different threat signatures, considering the methodology by which the interfering signal is generated (i.e., via a synthetic signal or via a real recorded signal), was also presented. The objective of this test setup was to investigate the impact of the interference signal generation on the RUT's positioning performance. The results showed that the interfering

signals generated synthetically impacted the RUT more than the real-recorded versions of the same signatures. This could potentially be due to the fact that the synthetic signatures are free from multipath and GNSS PRN codes and, hence, are much cleaner than the recorded ones, resulting in a far-reaching impact on the RUT's navigation performance. A dual frequency GPS/Galileo L1/E1 and L5/E5 test was also carried out in order to investigate whether the RUT could intelligently exploit frequency diversity in the presence of interference in the L1/E1 frequency band. It was interesting to observe that the L5/E5 signals were also affected, to a lesser extent, even though the interference was on the L1/E1 band. It was noticed that the professional RUT did not generate L5/E5 observations for those epochs in which no L1/E1 observation was generated. Based on this result, it can be stated that the RUT was not yet capable of exploiting frequency diversity to withstand interference in the L1/E1 band. Overall, this paper shows that the availability of threat signatures and standardized test procedures is critical towards understanding the behavior of GNSS receivers under the most frequently encountered interference threats. This, in turn, should help the GNSS community in developing next generation robust receiver technologies and support the wider utilization of GNSS in safety and liability-critical high-value applications.

Author Contributions: M.Z.H.B. wrote the manuscript with the help of the other authors. In addition, M.Z.H.B. was involved in the test platform development and wrote the MATLAB-based script for analyzing the receiver test data. N.G.F. wrote some parts of the manuscript, developed the C++-based script for the automation of receiver testing, carried out the MM receiver testing, and also helped in analyzing the test results. A.H. wrote a script to automate the PRO receiver testing, carried out all the PRO receiver tests, and also helped in analyzing the receiver test results. S.T. contributed to the introductory part of the manuscript and was also involved in the test platform development and results analysis. M.P. and M.D.
were involved in the real-world interference data collection and classification, overall reviewing of the manuscript, and also offered an expert industrial opinion on the subject matter.

Funding: This work has been co-funded by the EU H2020 project entitled Standardization of GNSS Threat reporting and Receiver testing through International Knowledge Exchange, Experimentation and Exploitation (STRIKE3) under its grant agreement.

Acknowledgments: The authors would like to acknowledge the STRIKE3 consortium for their constructive remarks on the test campaign and the results analysis of the different receiver testing.

Conflicts of Interest: The authors declare no conflict of interest.

2019 by the authors. Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.