
Screen Scraping with C# for ASP.NET

1 Mar 2002
Using C# to scrape content from a third-party site and present it on an ASP.NET webpage

Sample Image - weather.jpg

Introduction

This project is not earth-shattering or revolutionary; it is simply a means of coming to terms with ASP.NET and C# development on my part, and hopefully a way to expose some knowledge and ideas to others.

This project began with the need to create an intranet portal that contained, among other things, the local weather forecast. The design called for the forecast information to look just like a local TV station's web site. Since I could not use their site, and paying for a service to provide the information was not an option, it was determined that screen scraping the local TV site would be a good solution. I decided this would be a good introduction to the .NET world, so I used ASP.NET with C# as the coding language.

WARNING

It should go without saying that screen scraping is not the best solution in many cases. You are completely at the mercy of the third-party site: if the layout changes, you must rework your solution. It may also present some legal questions as to your rights to use someone else's work.

Details

The first step in the design was to call up the providing site, http://www.pittsburgh.com/partners/wpxi/weather/ in this case, and look at the HTML to find the information needed. In my case I was able to search for the heading

<B>Current Conditions for Pittsburgh</B>

The weather information was found in two tables so it was just a matter of searching the HTML text and extracting the tables. I could then pass this as the innerHTML content for a table on my webpage.

<TABLE id="Table1" width="100%" border="0">
<TR>
    <TD align="middle" colSpan="2"><STRONG>Local Weather Forecast</STRONG></TD>
</TR>
<TR>
    <TD><%=GetWeather()%></TD>
    <TD><%=GetForecast()%></TD>
</TR>
<TR>
    <TD align="middle" colSpan="2">information provided by WPXI</TD>
</TR>
</TABLE>
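The GetWeather() method referenced in the markup above is not listed in the article; a minimal sketch of how it might tie the pieces together follows. The AcquireHtml helper name and the lazy-caching check are my assumptions, not the author's code:

```csharp
// Hypothetical code-behind member tying the pieces together;
// GetForecast() would follow the same pattern with its own search phrase.
protected string GetWeather()
{
    // Download and cache the page on first use (the article's download
    // code stores the page text in the m_strSite member)
    if (m_strSite == null)
        AcquireHtml("http://www.pittsburgh.com/partners/wpxi/weather/");

    // Extract the current-conditions table from the cached HTML
    return FindWeatherTable();
}
```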
        

Acquire the HTML

Using the .NET library it was easy to acquire the HTML from the site. As can be seen, we just need to create a WebRequest object and feed the response stream into an instance of StreamReader. From there I parse through it to remove the empty lines and assign the result to a string. I'm using the StringBuilder.Append method as an alternative to appending to the string, based on the recommendation of Charles Petzold in Programming Microsoft Windows with C#, where he demonstrates that using the StringBuilder is 1000x faster than appending to a string.

// Open the requested URL

WebRequest req = WebRequest.Create(strURL);

// Get the stream from the returned web response

StreamReader stream = new StreamReader(req.GetResponse().GetResponseStream());

// Create a StringBuilder to accumulate the page text

System.Text.StringBuilder sb = new System.Text.StringBuilder();
string strLine;
// Read the stream a line at a time and place each one

// into the stringbuilder

while( (strLine = stream.ReadLine()) != null )
{
    // Ignore blank lines

    if(strLine.Length > 0 )
        sb.Append(strLine);
}
// Finished with the stream so close it now

stream.Close();

// Cache the streamed site now so it can be used

// without reconnecting later

m_strSite = sb.ToString();
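The snippet above can be packaged as a small helper method. The AcquireHtml name is my own, and the using block is an alternative to the explicit Close() call that guarantees the reader (and the underlying response stream) is released even if a read throws:

```csharp
// Hypothetical helper wrapping the download code above.
private void AcquireHtml(string strURL)
{
    // Open the requested URL
    WebRequest req = WebRequest.Create(strURL);
    System.Text.StringBuilder sb = new System.Text.StringBuilder();

    // 'using' closes the reader and underlying stream on exit
    using (StreamReader stream =
           new StreamReader(req.GetResponse().GetResponseStream()))
    {
        string strLine;
        while ((strLine = stream.ReadLine()) != null)
        {
            // Ignore blank lines
            if (strLine.Length > 0)
                sb.Append(strLine);
        }
    }

    // Cache the page text for later extraction
    m_strSite = sb.ToString();
}
```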
        

Extract the tables

After the text has been acquired it is simply a matter of extracting and returning the substring. To fix the relative path of the images I run the substring through another method to insert the absolute path before returning it.

private string FindWeatherTable()
{
    int nIndexStart = 0;
    int nIndexEnd = 0;
    int nIndex = 0;

    try
    {
        // This phrase tells us where to start looking for the information

        // If it is found start looking for the first beginning table tag

        if( (nIndex = Find("Current Conditions for Pittsburgh", 0)) > 0 )
        {
            nIndexStart = Find("<TABLE", nIndex);
            if(nIndexStart > 0 )
            {
                // Need to find the second end table tag

                nIndex = Find("</TABLE>", nIndex);
                if(nIndex > 0 )
                {
                    // Add 1 to the index so we don't find the same 

                    // tag as above

                    nIndexEnd = Find("</TABLE>", nIndex+1);
                    if(nIndexEnd > 0 )
                        nIndexEnd += 8; // Include the characters in the tag

                }
            }
        }
        // Extract and return the substring containing the table we want

        // after correcting the img src elements

        return CorrectImgPath(m_strSite.Substring(nIndexStart, 
                              nIndexEnd - nIndexStart));
    }
    catch(Exception e)
    {
        return e.Message;
    }
}
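The Find() helper called throughout FindWeatherTable() is not listed in the article. Presumably it is a thin wrapper over String.IndexOf against the cached page text; the implementation below is my guess, made case-insensitive since HTML tag casing can vary:

```csharp
// Assumed implementation of the Find() helper: a case-insensitive
// IndexOf over the cached page text, returning -1 when not found.
private int Find(string strSearch, int nStart)
{
    return m_strSite.ToUpper().IndexOf(strSearch.ToUpper(), nStart);
}
```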

private string CorrectImgPath(string s)
{
    int nIndex = 0;
    try
    {
        // Absolute path to insert

        string strInsert = "http://www.pittsburgh.com";
        // Find any and all relative image paths and insert the absolute path

        while( (nIndex = s.IndexOf("/images/", nIndex)) >= 0 )
        {
            s = s.Insert(nIndex, strInsert);
            // Advance past the text just inserted so the same

            // occurrence is not matched again

            nIndex += strInsert.Length + 1;
        }
        return s;
    }
    catch(Exception e)
    {
        return e.Message;
    }
}

Conclusion

The complete site used ADO.NET to connect to a SQL Server database and provide the viewer with schedule and appointment information as well as corporate information. Viewers also had the ability to add events to their calendar. For simplicity I chose not to include these features in this sample. I just wanted to share a beginner's C# and ASP.NET exploration to give others some ideas.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
