Posted 2 Dec 2013

Html Agility Pack - Massive information extraction from WWW pages

Updated 4 Feb 2014 · CPOL · 6 min read
What do you do when a database of over 150,000 records is available only as a list of web pages, each holding just 50 records? You can spend a week clicking through it and die of boredom, or you can write a scraper that will do the work for you :)

Recently I needed to acquire a database. Unfortunately, it was published only as a website that presented 50 records per page, and the whole database had more than 150 thousand records. What to do in such a situation? Click through 3,000 pages, manually collecting data in a text file? One week and it's done! ;) Better to write a program (a so-called scraper) which will do the work for you. The program has to do three things:

  • generate a list of addresses from which data should be collected;
  • visit the pages sequentially and extract information from their HTML code;
  • dump the data to a local store and log work progress.
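Put together, the three steps can be sketched roughly like this. This is a minimal sketch, not the actual program from the article: the URL pattern, page count, `tr` row selector, and output file name are all placeholder assumptions.

```csharp
using System;
using System.IO;
using HtmlAgilityPack;

class Scraper
{
    static void Main()
    {
        // Step 1: generate the list of page addresses (URL pattern is hypothetical)
        const int pageCount = 3000;
        HtmlWeb htmlWeb = new HtmlWeb();
        using (StreamWriter output = new StreamWriter("data.txt"))
        {
            for (int pageNumber = 1; pageNumber <= pageCount; pageNumber++)
            {
                string url = string.Format("http://example.com/list?page={0}", pageNumber);

                // Step 2: download the page and extract records from its HTML
                HtmlDocument doc = htmlWeb.Load(url);
                foreach (HtmlNode row in doc.DocumentNode.Descendants("tr"))
                {
                    // Step 3: dump each record to the local store
                    output.WriteLine(row.InnerText.Trim());
                }

                Console.WriteLine("Processed page {0}/{1}", pageNumber, pageCount);
            }
        }
    }
}
```

The rest of this post fills in the details of each step.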

Address generation should be quite easy. On most sites pagination is built with plain links in which the page number is clearly visible, either in the URL path or in the query string. If pagination is done via AJAX calls the situation is a bit more complex, but let's not bother with that in this post... Once you know the pattern for the page number parameter, all that's needed is a simple loop with something like:

string url = string.Format("{0}", pageNumber);

Now it's time for something more interesting: how to extract data from a webpage. You can use the WebRequest/WebResponse or WebClient classes from the System.Net namespace to get the page content. After that you can extract information with regular expressions, or try to treat the downloaded content as XML and scrutinize it with XPath or LINQ to XML. These are not good approaches, however. For a complicated page structure, writing a correct expression might be difficult, and one should remember that in most cases webpages are not valid XML documents. Fortunately, the Html Agility Pack library was created. It allows convenient parsing of HTML pages, even those with malformed code (i.e., lacking proper closing tags). HAP goes through the page content and builds a document object model that can later be processed with LINQ to Objects or XPath.

To start working with HAP, install the NuGet package named HtmlAgilityPack (I was using version 1.4.6) and import the namespace with the same name. If you don't want to use NuGet (why not?), download the zip file from the project's website and add a reference to the HtmlAgilityPack.dll file suitable for your platform (the zip contains separate versions for, e.g., .NET 4.5 and Silverlight 5). The documentation in the .chm file might be useful too. Attention: when I opened the downloaded file (on Windows 7), the documentation looked empty. The "Unblock" option on the file's properties screen solved the problem.

Retrieving webpage content with HAP is very easy. You create an HtmlWeb object and call its Load method with the page address:

HtmlWeb htmlWeb = new HtmlWeb();
HtmlDocument htmlDocument = htmlWeb.Load("");

In return, you receive an object of the HtmlDocument class, which is the core of the HAP library.

HtmlWeb contains a bunch of properties that control how a document is retrieved. For example, it is possible to indicate whether cookies should be used (UseCookies) and what the value of the User-Agent header included in the HTTP request should be (UserAgent). For me, the AutoDetectEncoding and OverrideEncoding properties were especially useful, as they let me correctly read a document with Polish characters.

HtmlWeb htmlWeb = new HtmlWeb() { AutoDetectEncoding = false, OverrideEncoding = Encoding.GetEncoding("iso-8859-2") };

StatusCode (of type System.Net.HttpStatusCode) is another very useful property of HtmlWeb. With it you can check the result of the latest request.
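For example, after a Load call you can verify that the server actually returned the page. A sketch (the URL is hypothetical, and a real scraper would add retry logic here):

```csharp
using System;
using System.Net;
using HtmlAgilityPack;

class StatusCheck
{
    static void Main()
    {
        HtmlWeb htmlWeb = new HtmlWeb();
        HtmlDocument doc = htmlWeb.Load("http://example.com/list?page=1"); // hypothetical URL

        // StatusCode reflects the last request made by this HtmlWeb instance
        if (htmlWeb.StatusCode != HttpStatusCode.OK)
        {
            Console.WriteLine("Request failed with status {0}", htmlWeb.StatusCode);
            // skip this page or retry...
        }
    }
}
```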

Having the HtmlDocument object ready, you can start to extract data. Here's an example of how to obtain link addresses and texts from the previously downloaded webpage (add using System.Linq):

IEnumerable<HtmlNode> links = htmlDocument.DocumentNode.Descendants("a").Where(x => x.Attributes.Contains("href"));
foreach (var link in links)
    Console.WriteLine("Link href={0}, link text={1}", link.Attributes["href"].Value, link.InnerText);

The DocumentNode property, of type HtmlNode, points to the page's root. The Descendants method is used to retrieve all links (a tags) that contain an href attribute. After that, the texts and addresses are printed to the console. Quite easy, huh? A few other examples:

Getting HTML code of the whole page:

string html = htmlDocument.DocumentNode.OuterHtml;

Getting the element with the "footer" id:

HtmlNode footer = htmlDocument.DocumentNode.Descendants().SingleOrDefault(x => x.Id == "footer"); 

Getting the children of the div with the "toc" id and displaying the names of child nodes whose type is different from Text:

IEnumerable<HtmlNode> tocChildren = htmlDocument.DocumentNode.Descendants().Single(x => x.Id == "toc").ChildNodes;
foreach (HtmlNode child in tocChildren)
    if (child.NodeType != HtmlNodeType.Text)
        Console.WriteLine(child.Name);

Getting list elements (li tags) that have the toclevel-1 class:

IEnumerable<HtmlNode> tocLiLevel1 = htmlDocument.DocumentNode.Descendants()
    .Where(x => x.Name == "li" && x.Attributes.Contains("class")
    && x.Attributes["class"].Value.Split().Contains("toclevel-1"));

Notice that the Where filter is quite complex. A simpler condition:

Where(x => x.Name == "li" && x.Attributes["class"].Value == "toclevel-1")

is not correct! First, there is no guarantee that every li tag has the class attribute set, so we need to check whether the attribute exists to avoid a NullReferenceException. Second, the check for toclevel-1 is flawed. An HTML element may have many classes, so instead of == it's worthwhile to use Contains(). Plain Value.Contains is not enough, though. What if we are looking for the "sec" class and an element has the "secret" class? Such an element would be matched too! Rather than Value.Contains, you should use Value.Split().Contains. This way an array of strings is checked with the equality operator (instead of searching a single string for a substring).
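The difference is easy to demonstrate with plain string operations, no HAP needed (the class attribute value here is made up):

```csharp
using System;
using System.Linq;

class ClassMatchDemo
{
    static void Main()
    {
        string classAttribute = "secret toclevel-1";

        // Substring search: wrongly matches, because "secret" contains "sec"
        Console.WriteLine(classAttribute.Contains("sec"));                // True

        // Token search: splits on whitespace, then compares whole tokens
        Console.WriteLine(classAttribute.Split().Contains("sec"));        // False
        Console.WriteLine(classAttribute.Split().Contains("toclevel-1")); // True
    }
}
```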

Getting the texts of all li elements that are nested inside at least one other li element:

var liTexts = from node in htmlDocument.DocumentNode.Descendants()
              where node.Name == "li" && node.Ancestors("li").Count() > 0
              select node.InnerText;

Beyond LINQ to Objects, XPath can also be used to extract information. For example:

Getting a tags whose href attribute value starts with # and is longer than 15 characters:

IEnumerable<HtmlNode> links = htmlDocument.DocumentNode.SelectNodes("//a[starts-with(@href, '#') and string-length(@href) > 15]");

Finding li elements inside the div with id "toc" that are the third li within their parent element:

IEnumerable<HtmlNode> listItems = htmlDocument.DocumentNode.SelectNodes("//div[@id='toc']//li[3]");

XPath is a complex tool and it's impossible to show all of its capabilities in this post...

HAP lets you explore page structure and content, but it also allows modifying and saving pages. It has helper methods for detecting document encoding (DetectEncoding), removing HTML entities (DeEntitize), and more. It is also possible to gather validation information (e.g., check whether the original document had proper closing tags). These topics are beyond the scope of this post.
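For a taste of these helpers: HtmlEntity.DeEntitize turns entities back into characters, and an HtmlDocument can be written back to disk. A sketch, assuming htmlDocument is an already loaded HtmlDocument (the sample string and file name are made up):

```csharp
using HtmlAgilityPack;

// Replace HTML entities with the characters they stand for
string clean = HtmlEntity.DeEntitize("Fish &amp; chips"); // "Fish & chips"

// Write the (possibly modified) document back out to a file
htmlDocument.Save("page-copy.htm");
```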

While processing consecutive pages, dump the useful information to whatever local store suits your needs. Maybe a .csv file will be enough for you, maybe an SQL database will be required? For me, a plain text file was sufficient.
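Dumping to a text file needs nothing beyond the standard library. A minimal sketch (the record fields and file name are made up for illustration):

```csharp
using System.IO;

class DumpDemo
{
    static void Main()
    {
        // Hypothetical record fields extracted from one table row
        string id = "42", name = "John Doe", city = "Warsaw";

        // Append one record per line; semicolon-separated so the file opens as CSV
        using (StreamWriter writer = File.AppendText("records.csv"))
        {
            writer.WriteLine("{0};{1};{2}", id, name, city);
        }
    }
}
```

Appending (rather than rewriting) the file means a crashed scraper keeps everything it collected so far.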

The last thing worth doing is ensuring that the scraper properly logs information about its progress (you surely want to know how far the program got and whether it encountered any errors). For logging it is best to use a specialized library such as log4net. There are plenty of tutorials on log4net, so I won't describe it here, but I will show a sample configuration which you can use in a console application:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <configSections>
        <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>
    </configSections>
    <log4net>
        <root>
            <level value="DEBUG"/>
            <appender-ref ref="ConsoleAppender" />
            <appender-ref ref="RollingFileAppender"/>
        </root>
        <appender name="ConsoleAppender" type="log4net.Appender.ColoredConsoleAppender">
            <layout type="log4net.Layout.PatternLayout">
                <conversionPattern value="%date{ISO8601} %level [%thread] %logger - %message%newline" />
            </layout>
            <mapping>
                <level value="ERROR" />
                <foreColor value="White" />
                <backColor value="Red" />
            </mapping>
            <filter type="log4net.Filter.LevelRangeFilter">
                <levelMin value="INFO" />
            </filter>
        </appender>
        <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
            <file value="Log.txt" />
            <appendToFile value="true" />
            <rollingStyle value="Size" />
            <maxSizeRollBackups value="10" />
            <maximumFileSize value="50MB" />
            <staticLogFileName value="true" />
            <layout type="log4net.Layout.PatternLayout">
                <conversionPattern value="%date{ISO8601} %level [%thread] %logger - %message%newline%exception" />
            </layout>
        </appender>
    </log4net>
</configuration>

The above config contains two appenders: ConsoleAppender and RollingFileAppender. The first logs text to the console window, ensuring that errors are clearly distinguished by color. To reduce the amount of information, a LevelRangeFilter is set so that only entries with INFO or higher level are shown. The second appender logs to a text file (even DEBUG-level entries go there). The maximum size of a single file is set to 50MB and the total number of files is limited to 10. The current log is always in the Log.txt file...
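With the configuration in place, hooking log4net up in a console application takes only a few lines. A sketch, assuming the log4net section lives in the application's App.config:

```csharp
using log4net;

class Program
{
    // One logger per class is the usual log4net convention
    private static readonly ILog Log = LogManager.GetLogger(typeof(Program));

    static void Main()
    {
        // Read the log4net section from App.config
        log4net.Config.XmlConfigurator.Configure();

        Log.Info("Scraping started");
        Log.Debug("Goes to the file only (the console filter drops levels below INFO)");
        Log.Error("Shows up white-on-red in the console");
    }
}
```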

And that's all: the scraper is ready! Run it and let it labor for you. No dull, long hours of work; leave that to people who don't know how to program :)

Additionally, you can try a little exercise: instead of generating the list of all pages to visit up front, specify only the first page and find the link to the next page in the currently processed one...
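A sketch of that approach; the starting URL and the "Next" link text are assumptions about the target site, and record extraction is elided:

```csharp
using System;
using System.Linq;
using HtmlAgilityPack;

class NextLinkScraper
{
    static void Main()
    {
        HtmlWeb htmlWeb = new HtmlWeb();
        string url = "http://example.com/list?page=1"; // hypothetical first page

        while (url != null)
        {
            HtmlDocument doc = htmlWeb.Load(url);
            // ... extract records from doc here ...

            // Follow the pagination link to the next page, if one exists
            HtmlNode nextLink = doc.DocumentNode.Descendants("a")
                .FirstOrDefault(x => x.InnerText.Trim() == "Next" && x.Attributes.Contains("href"));
            url = nextLink != null ? nextLink.Attributes["href"].Value : null;
        }
    }
}
```

Note that if the site uses relative hrefs, you would also need to resolve them against the current page address (e.g., with the System.Uri class).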

P.S.: Keep in mind that HAP works on the HTML code that was sent by the server (this code is used by HAP to build the document model). The DOM which you can observe in a browser's developer tools is often the result of script execution and might differ greatly from the one built directly from the HTTP response.

  • Update 08.12.2013: As requested, I created a simple demo (Visual Studio 2010 solution) of how to use Html Agility Pack and log4net. The app extracts some links from a wiki page and dumps them to a text file. The wiki page is saved to an .htm file to avoid a dependency on a web resource that might change. Download.
  • Update 05.12.2013: The code samples that select by id now use Single instead of Where+First. It's good practice to use the Single method if you expect exactly one element.


This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

