CS6322: Information Retrieval Sanda Harabagiu. Lecture 11: Crawling and web indexes

Today's lecture: Crawling; Connectivity servers

Sec. 20.2 Basic crawler operation: Begin with known seed URLs; fetch and parse them; extract the URLs they point to; place the extracted URLs on a queue; fetch each URL on the queue and repeat.
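
A minimal sketch of this loop in Python, assuming naive regex-based link extraction and ignoring politeness, robots.txt, and duplicate-content checks (the seed list and page limit are placeholders):

```python
from collections import deque
from urllib.parse import urljoin
from urllib.request import urlopen
import re

def basic_crawl(seed_urls, max_pages=100):
    frontier = deque(seed_urls)          # queue of URLs still to fetch
    seen = set(seed_urls)                # URLs already placed on the queue
    crawled = []
    while frontier and len(crawled) < max_pages:
        url = frontier.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue                     # skip unreachable or non-text pages
        crawled.append(url)
        # naive link extraction; a real crawler would use an HTML parser
        for href in re.findall(r'href="([^"#]+)"', html):
            link = urljoin(url, href)    # expand relative URLs
            if link.startswith("http") and link not in seen:
                seen.add(link)
                frontier.append(link)
    return crawled
```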

Sec. 20.2 Crawling picture (figure): seed pages feed the URL frontier; URLs are crawled and parsed; beyond them lies the unseen Web.

Sec. 20.1.1 Simple picture complications: Web crawling isn't feasible with one machine; all of the above steps must be distributed. Malicious pages: spam pages, spider traps (including dynamically generated ones). Even non-malicious pages pose challenges: latency/bandwidth to remote servers vary; webmasters' stipulations (how deep should you crawl a site's URL hierarchy?); site mirrors and duplicate pages. Politeness: don't hit a server too often.

Sec. 20.1.1 What any crawler must do. Be polite: respect implicit and explicit politeness considerations; only crawl allowed pages; respect robots.txt (more on this shortly). Be robust: be immune to spider traps and other malicious behavior from web servers.

Sec. 20.1.1 What any crawler should do. Be capable of distributed operation: designed to run on multiple distributed machines. Be scalable: designed to increase the crawl rate by adding more machines. Performance/efficiency: permit full use of available processing and network resources.

Sec. 20.1.1 What any crawler should do. Fetch pages of higher quality first. Continuous operation: continue fetching fresh copies of previously fetched pages. Extensible: adapt to new data formats and protocols.

Sec. 20.1.1 Updated crawling picture (figure): as before, seed pages feed the URL frontier and URLs are crawled and parsed, with the unseen Web beyond; crawling threads now pull URLs from the frontier.

Sec. 20.2 URL frontier Can include multiple pages from the same host Must avoid trying to fetch them all at the same time Must try to keep all crawling threads busy

Sec. 20.2 Explicit and implicit politeness. Explicit politeness: specifications from webmasters on what portions of a site can be crawled (robots.txt). Implicit politeness: even with no specification, avoid hitting any site too often.

Sec. 20.2.1 Robots.txt. Protocol for giving spiders ("robots") limited access to a website, originally from 1994: www.robotstxt.org/wc/norobots.html. The website announces its request on what can(not) be crawled: for a URL, create a file URL/robots.txt; this file specifies access restrictions.

Sec. 20.2.1 Robots.txt example. No robot should visit any URL starting with "/yoursite/temp/", except the robot called "searchengine":
User-agent: *
Disallow: /yoursite/temp/

User-agent: searchengine
Disallow:
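
Python's standard library ships a parser for this protocol; a small sketch checking the slide's example rules (the example.com URLs are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# The rules from the slide's example robots.txt
rules = """\
User-agent: *
Disallow: /yoursite/temp/

User-agent: searchengine
Disallow:
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())   # parse rules directly instead of fetching them

# The generic agent may not fetch under /yoursite/temp/; "searchengine" may.
print(rp.can_fetch("*", "http://example.com/yoursite/temp/page.html"))            # False
print(rp.can_fetch("searchengine", "http://example.com/yoursite/temp/page.html")) # True
```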

Sec. 20.2.1 Processing steps in crawling: Pick a URL from the frontier (which one?); fetch the document at the URL; parse the URL and extract links from it to other docs (URLs); check if the URL has content already seen (if not, add to indexes); for each extracted URL, ensure it passes certain URL filter tests (e.g., only crawl .edu, obey robots.txt, etc.) and check if it is already in the frontier (duplicate URL elimination).

Sec. 20.2.1 Basic crawl architecture (figure): the URL frontier feeds fetch and parse modules (using DNS and the WWW); parsed content passes the "Content seen?" test (against doc fingerprints), then the URL filter (using robots templates), then duplicate URL elimination (against the URL set); surviving URLs re-enter the URL frontier.

Sec. 20.2.2 DNS (Domain Name System): a lookup service on the internet. Given a URL, retrieve the IP address of its host. The service is provided by a distributed set of servers, so lookup latencies can be high (even seconds). Common OS implementations of DNS lookup are blocking: only one outstanding request at a time. Solutions: DNS caching; a batch DNS resolver that collects requests and sends them out together.
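
A minimal sketch of DNS caching around the blocking resolver in Python's socket module; a production crawler such as Mercator uses its own asynchronous batch resolver instead, and the negative-caching choice here is an assumption:

```python
import socket

_dns_cache = {}

def resolve(host):
    """Resolve a hostname, caching results to avoid repeated blocking lookups."""
    if host not in _dns_cache:
        try:
            _dns_cache[host] = socket.gethostbyname(host)   # blocking OS lookup
        except socket.gaierror:
            _dns_cache[host] = None                         # negative cache (assumed policy)
    return _dns_cache[host]

print(resolve("en.wikipedia.org"))
print(resolve("en.wikipedia.org"))   # second call is served from the cache
```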

Sec. 20.2.1 Parsing: URL normalization. When a fetched document is parsed, some of the extracted links are relative URLs. E.g., at http://en.wikipedia.org/wiki/Main_Page there is a relative link to /wiki/Wikipedia:General_disclaimer, which is the same as the absolute URL http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer. During parsing, such relative URLs must be normalized (expanded).
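
In Python this expansion is what urllib.parse.urljoin performs; a sketch using the slide's Wikipedia example:

```python
from urllib.parse import urljoin

base = "http://en.wikipedia.org/wiki/Main_Page"
relative = "/wiki/Wikipedia:General_disclaimer"

# Expand the relative link against the page it was found on.
print(urljoin(base, relative))
# -> http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer
```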

Sec. 20.2.1 Content seen? Duplication is widespread on the web If the page just fetched is already in the index, do not further process it This is verified using document fingerprints or shingles
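
A sketch of the content-seen test using a truncated SHA-1 hash of the page text as the document fingerprint (an assumption for illustration; shingling would additionally catch near-duplicates):

```python
import hashlib

seen_fingerprints = set()

def fingerprint(text):
    """64-bit document fingerprint: first 8 bytes of SHA-1 (illustrative choice)."""
    return hashlib.sha1(text.encode("utf-8")).digest()[:8]

def content_seen(text):
    """Return True if an identical page has already been processed."""
    fp = fingerprint(text)
    if fp in seen_fingerprints:
        return True
    seen_fingerprints.add(fp)
    return False

print(content_seen("hello web"))   # False: first time this content is seen
print(content_seen("hello web"))   # True: exact duplicate
```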

Sec. 20.2.1 Filters and robots.txt. Filters: regular expressions for URLs to be crawled or not. Once a robots.txt file is fetched from a site, it need not be fetched repeatedly; doing so burns bandwidth and hits the web server. Cache robots.txt files.

Sec. 20.2.1 Duplicate URL elimination. For a non-continuous (one-shot) crawl, test whether an extracted+filtered URL has already been passed to the frontier. For a continuous crawl, see the details of the frontier implementation.

Sec. 20.2.1 Distributing the crawler. Run multiple crawl threads, under different processes, potentially at geographically distributed nodes. Partition the hosts being crawled among the nodes; a hash is used for the partition. How do these nodes communicate?
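
A sketch of the hash-based host partition, assuming a fixed, hypothetical node count; all URLs from the same host map to the same node:

```python
import hashlib

NUM_NODES = 4   # assumed number of crawler nodes

def node_for_host(host):
    """Assign every URL from the same host to the same crawler node."""
    h = int.from_bytes(hashlib.md5(host.encode("utf-8")).digest()[:4], "big")
    return h % NUM_NODES

print(node_for_host("www.stanford.edu"))
print(node_for_host("en.wikipedia.org"))
```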

Sec. 20.2.1 Communication between nodes. The output of the URL filter at each node is sent to the Duplicate URL Eliminator at all nodes. (Figure: the basic crawl architecture with a host splitter inserted after the URL filter, sending URLs to other crawler hosts and receiving URLs from them before duplicate URL elimination.)

Sec. 20.2.3 URL frontier: two main considerations. Politeness: do not hit a web server too frequently. Freshness: crawl some pages more often than others, e.g., pages (such as news sites) whose content changes often. These goals may conflict with each other. (E.g., a simple priority queue fails: many links out of a page go to its own site, creating a burst of accesses to that site.)

Sec. 20.2.3 Politeness challenges. Even if we restrict only one thread to fetch from a host, it can hit the host repeatedly. Common heuristic: insert a time gap between successive requests to a host that is >> the time taken by the most recent fetch from that host.
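
A sketch of that heuristic, tracking per host the finish time and duration of the most recent fetch; the multiple of 10 is an illustrative assumption, not a prescribed value:

```python
import time

last_finish = {}     # host -> time the last fetch finished
last_duration = {}   # host -> how long that fetch took (seconds)
GAP_FACTOR = 10      # assumed multiple of the last fetch time to wait

def ready_to_fetch(host, now=None):
    """True if enough time has passed since the last request to this host."""
    now = time.time() if now is None else now
    if host not in last_finish:
        return True
    return now >= last_finish[host] + GAP_FACTOR * last_duration[host]

def record_fetch(host, started, finished):
    """Record timing of a completed fetch so the gap can be enforced."""
    last_finish[host] = finished
    last_duration[host] = finished - started
```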

Sec. 20.2.3 URL frontier: the Mercator scheme (figure): incoming URLs pass through a prioritizer into K front queues; a biased front queue selector and a back queue router move them into B back queues, each holding a single host; a back queue selector serves the crawl thread requesting a URL.

Sec. 20.2.3 Mercator URL frontier URLs flow in from the top into the frontier Front queues manage prioritization Back queues enforce politeness Each queue is FIFO

Sec. 20.2.3 Front queues (figure): the prioritizer feeds front queues 1 through K; the biased front queue selector passes URLs on to the back queue router.

Sec. 20.2.3 Front queues. The prioritizer assigns each URL an integer priority between 1 and K and appends the URL to the corresponding queue. Heuristics for assigning priority: refresh rate sampled from previous crawls; application-specific criteria (e.g., "crawl news sites more often").

Sec. 20.2.3 Biased front queue selector. When a back queue requests a URL (in a sequence to be described), the selector picks a front queue from which to pull a URL. This choice can be round robin biased toward queues of higher priority, or some more sophisticated variant; it can be randomized.
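
A sketch of a randomized, priority-biased pick among the front queues; the linear weighting and the example URLs are assumptions:

```python
import random
from collections import deque

K = 3
front_queues = {i: deque() for i in range(1, K + 1)}   # 1 = highest priority

def pick_front_queue():
    """Randomly pick a non-empty front queue, biased toward higher priority."""
    candidates = [i for i in front_queues if front_queues[i]]
    if not candidates:
        return None
    # assumed bias: priority i gets weight K - i + 1, so queue 1 is drawn most often
    weights = [K - i + 1 for i in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

front_queues[1].append("http://news.example.com/a")      # hypothetical URLs
front_queues[3].append("http://static.example.com/b")
print(pick_front_queue())
```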

Sec. 20.2.3 Back queues (figure): the back queue router fills back queues 1 through B; a heap and the back queue selector decide which back queue to serve next.

Sec. 20.2.3 Back queue invariants. Each back queue is kept non-empty while the crawl is in progress. Each back queue only contains URLs from a single host. Maintain a table from host names to back queues.

Sec. 20.2.3 Back queue heap. One entry for each back queue; the entry is the earliest time t_e at which the host corresponding to the back queue can be hit again. This earliest time is determined from the last access to that host and any time buffer heuristic we choose.

Sec. 20.2.3 Back queue processing. A crawler thread seeking a URL to crawl: extracts the root of the heap; fetches the URL at the head of the corresponding back queue q (looked up from the table); checks if queue q is now empty, and if so, pulls a URL v from the front queues. If there is already a back queue for v's host, append v to that queue and pull another URL from the front queues; repeat. Else add v to q. When q is non-empty, create a heap entry for it.
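
A sketch of the heap-driven selection, with (earliest allowed time, back-queue id) entries; the queues, URLs, and politeness gap are assumed, and refilling an emptied queue from the front queues is only noted in a comment:

```python
import heapq
import time
from collections import deque

# Each back queue holds URLs from a single (hypothetical) host.
back_queues = {0: deque(["http://a.example.com/1", "http://a.example.com/2"]),
               1: deque(["http://b.example.com/1"])}
heap = [(time.time(), q) for q in back_queues]   # (earliest allowed time, queue id)
heapq.heapify(heap)
POLITENESS_GAP = 2.0                             # assumed seconds between hits on a host

def next_url():
    """Pop the host that may be hit soonest and return one of its URLs."""
    t_earliest, q = heapq.heappop(heap)          # root of the heap
    wait = t_earliest - time.time()
    if wait > 0:
        time.sleep(wait)                         # respect politeness
    url = back_queues[q].popleft()
    # A real frontier would refill q from the front queues if it just emptied.
    if back_queues[q]:
        heapq.heappush(heap, (time.time() + POLITENESS_GAP, q))
    return url

print(next_url())
```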

Sec. 20.2.3 Number of back queues B Keep all threads busy while respecting politeness Mercator recommendation: three times as many back queues as crawler threads

Connectivity servers

Sec. 20.4 Connectivity Server Support for fast queries on the web graph Which URLs point to a given URL? Which URLs does a given URL point to? Stores mappings in memory from URL to outlinks, URL to inlinks Applications Crawl control Web graph analysis Connectivity, crawl optimization Link analysis

Sec. 20.4 Champion published work: Boldi and Vigna, in the Proceedings of the WWW Conference 2004. WebGraph: a set of algorithms and a Java implementation. Fundamental goal: maintain node adjacency lists in memory; for this, compressing the adjacency lists is the critical component.

Sec. 20.4 Adjacency lists. The set of neighbors of a node. Assume each URL is represented by an integer; e.g., for a 4 billion page web, we need 32 bits per node. Naively, this demands 64 bits to represent each hyperlink.

Sec. 20.4 Adjacency list compression. Properties exploited in compression: similarity (between lists); locality (many links from a page go to nearby pages), so use gap encodings in sorted lists; the distribution of gap values.

Sec. 20.4 Storage. Boldi/Vigna get down to an average of ~3 bits/link (URL-to-URL edge). How? Why is this remarkable?

Sec. 20.4 Main ideas of Boldi/Vigna Consider lexicographically ordered list of all URLs, e.g., www.stanford.edu/alchemy www.stanford.edu/biology www.stanford.edu/biology/plant www.stanford.edu/biology/plant/copyright www.stanford.edu/biology/plant/people www.stanford.edu/chemistry

Sec. 20.4 Boldi/Vigna. Each of these URLs has an adjacency list. Main idea: due to templates, the adjacency list of a node is similar to that of one of the 7 preceding URLs in the lexicographic ordering. Express the adjacency list in terms of one of these. E.g., consider these adjacency lists:
1, 2, 4, 8, 16, 32, 64
1, 4, 9, 16, 25, 36, 49, 64
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144
1, 4, 8, 16, 25, 36, 49, 64
Encode the last as (-2): remove 9, add 8. Why 7?
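
A sketch of this reference idea on the slide's example lists: express a list as a back-reference to a prototype plus the elements removed from and added to it (real WebGraph encodings use more refined copy blocks and interval codes):

```python
def encode_relative(adjacency, prototype, offset):
    """Represent `adjacency` as (back-reference, removed, added) w.r.t. `prototype`."""
    removed = sorted(set(prototype) - set(adjacency))
    added = sorted(set(adjacency) - set(prototype))
    return (-offset, removed, added)

prototype = [1, 4, 9, 16, 25, 36, 49, 64]   # list of the node 2 positions earlier
current   = [1, 4, 8, 16, 25, 36, 49, 64]
print(encode_relative(current, prototype, 2))
# -> (-2, [9], [8])   i.e. "copy the list two back, remove 9, add 8"
```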

Sec. 20.4 Gap encodings. Given a sorted list of integers x, y, z, ..., represent it by x, y-x, z-y, ... Compress each integer using a code: γ code (number of bits = 1 + 2⌊lg x⌋); δ code; information-theoretic bound: 1 + ⌊lg x⌋ bits; ζ code: works well for integers drawn from a power law (Boldi and Vigna, DCC 2004).
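
A sketch of gap encoding with the γ code, which spends 1 + 2⌊lg x⌋ bits per gap (unary length followed by the binary offset):

```python
def gamma_encode(x):
    """γ code of a positive integer: unary length, a 0 separator, then the offset bits."""
    assert x >= 1
    binary = bin(x)[2:]              # binary representation of x
    offset = binary[1:]              # drop the leading 1
    return "1" * len(offset) + "0" + offset   # 1 + 2*floor(lg x) bits in total

def encode_gaps(sorted_list):
    """Gap-encode a sorted adjacency list: x, y-x, z-y, ..."""
    gaps = [sorted_list[0]] + [b - a for a, b in zip(sorted_list, sorted_list[1:])]
    return "".join(gamma_encode(g) for g in gaps)

print(encode_gaps([1, 2, 4, 8, 16, 32, 64]))
```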

Sec. 20.4 Main advantages of BV Depends only on locality in a canonical ordering Lexicographic ordering works well for the web Adjacency queries can be answered very efficiently To fetch out-neighbors, trace back the chain of prototypes This chain is typically short in practice (since similarity is mostly intra-host) Can also explicitly limit the length of the chain during encoding Easy to implement one-pass algorithm

Resources: IIR Chapter 20; Mercator: A scalable, extensible web crawler (Heydon et al. 1999); A standard for robot exclusion; The WebGraph framework I: Compression techniques (Boldi et al. 2004); The Heritrix web crawler.