CBC news article comments analysis

The CBC (Canadian Broadcasting Corporation) news website articles often have a comments section. It would be interesting to see the interactions between comments and replies, to learn which users make the most comments, and to find the most frequently used words and phrases.

See the results: https://sitrucp.github.io/cbc_comments/image_grid.html

Comments for a specific CBC opinion article are analysed in detail below.

See a previous post which details how to obtain comments from CBC news and opinion articles. Code for this project can be found in this Github repository.

The opinion article was titled “On COVID restrictions, our governments keep firing up the gaslights and shifting the goalposts”. It garnered 7,800 comments by 1,226 unique users. The comment and user counts include both posts and replies. The comments were posted over a two-day period beginning Dec 03, 2021 4:00 AM ET, after which the comments were locked.

Referring to the line chart below, one can see that 50% (about 615) of the 1,226 users made 90% of the comments. Only 9% (about 105 users) of the users made 50% of the comments!
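These concentration numbers can be reproduced from the comments csv file created by the scraping post described later in this series. A minimal sketch using pandas is below; the csv file name is a placeholder, not the article's actual file name.

import pandas as pd

# hypothetical file name - use the csv created by the scraping script for this article
df = pd.read_csv('C:/cbc_comments/data/article_comments.csv')

# comments + replies per user, sorted most active first
counts = df['comment_user'].value_counts()

# cumulative share of all comments covered by the top N users
cum_share = counts.cumsum() / counts.sum()

users_for_50pct = (cum_share <= 0.50).sum()
users_for_90pct = (cum_share <= 0.90).sum()
print(users_for_50pct, 'of', len(counts), 'users made 50% of the comments')
print(users_for_90pct, 'of', len(counts), 'users made 90% of the comments')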

The “word cloud” chart below shows the names of the top 200 users by comment and reply count. The name size corresponds to user comment and reply counts.
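The user name word cloud is essentially a word cloud sized by per-user comment counts. A minimal sketch of one way to build it with the WordCloud module's generate_from_frequencies method is below, again assuming the scraped csv with a placeholder file name (an illustration, not the exact code used for the chart above).

import pandas as pd
import matplotlib.pyplot as plt
from wordcloud import WordCloud

df = pd.read_csv('C:/cbc_comments/data/article_comments.csv')  # hypothetical file name

# comment + reply counts for the top 200 users
user_counts = df['comment_user'].value_counts().head(200).to_dict()

# size each user name by their comment count
wordcloud = WordCloud(width=800, height=450, background_color='black', colormap='rainbow')
wordcloud.generate_from_frequencies(user_counts)

plt.imshow(wordcloud, interpolation='bilinear')
plt.axis('off')
plt.show()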

Of the 7,800 comments, 1,744 (22%) were “top-level” comments, i.e., they were not directly replying to another comment. The remaining 6,056 (78%) were replies to another comment. This indicates a lot of interaction between commenters.
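This split can be read directly from the comment_type column ('parent' or 'reply') created by the scraping script later in this series. A quick check, assuming the same csv:

import pandas as pd

df = pd.read_csv('C:/cbc_comments/data/article_comments.csv')  # hypothetical file name
print(df['comment_type'].value_counts())                 # counts of 'parent' vs 'reply'
print(df['comment_type'].value_counts(normalize=True))   # as shares, e.g. ~22% vs ~78%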

The next series of “network” charts below provides some insight into the interactions between users, their comments and replies.

The network charts were created using the Python NetworkX module. The code used to create the NetworkX charts is in another post.

The red circles (“nodes”) are users. The circle size corresponds to user comment counts. The lines (“edges”) connecting the red circles represent interactions between users in the form of replies to comments. The line arrows indicate who was replying to whom.

The first chart is a whole view of the 1,140 users that had at least one reply to their comment. It has 1,140 nodes and 6,000 edges so it makes for a very dense visualization and a big image size. Click on the image to open it in your browser where you will be able to zoom into it and download it if you want.

A closer look below shows more detail. The center of the chart has the users with the greatest number of comments and replies. The outer edges show users with fewer comments and replies.

Another closer look shows even more detail of the sparse, low comment and reply count users on the edges of the chart.

This final “word cloud” visualization shows the top 200 words in all of the comments.

CBC news comment and replies interaction network visualization

The CBC (Canadian Broadcasting Corporation) news website articles often have a comments section. It would be interesting to see the interactions between comments and replies, to learn which users make the most comments, and to find the most frequently used words and phrases.

See the results: https://sitrucp.github.io/cbc_comments/image_grid.html

Another post details a method to retrieve the comments. Each comment includes a timestamp of when it was posted, the comment text, and the commenter's user name, and, if it is a reply, the name of the user being replied to.

This information can be aggregated to get counts of posts by comment user name or by date/time. It can also be used to learn more about user interactions by visualizing the comment and reply user names in a network visualization created with the Python NetworkX module. The code used is provided below.
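The NetworkX code is further below. As a quick illustration of the aggregation step, here is a minimal sketch, assuming the csv produced by the scraping post (placeholder file name), of counting posts by user and by hour:

import pandas as pd

df = pd.read_csv('C:/cbc_comments/data/article_comments.csv')   # hypothetical file name
df['comment_time'] = pd.to_datetime(df['comment_time'])

# posts and replies per comment user
by_user = df.groupby('comment_user').size().sort_values(ascending=False)

# posts and replies per hour
by_hour = df.set_index('comment_time').resample('1H').size()

print(by_user.head(10))
print(by_hour)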

The visualization below illustrates the relationships between the 104 comments and replies, by comment user, for the article “Unvaccinated travellers over the age of 12 barred from planes and trains as of today”. (Note: the comments data for this visualization was obtained just after the article was posted, when it had about 100 comments and replies. Today it has 2,000+ comments.)

 

The red circles are “nodes”, which represent the comment users. The node size corresponds to the user's total number of comments and replies. The lines are “edges” and connect nodes; an edge represents a reply from one user to another. The edges have arrows that indicate the direction, i.e., who replied to whom.

The edge line widths represent the number of interactions between two nodes. Interactions are comment replies from one person to another (in either direction). The more interactions, the wider the edge line.
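The full script below draws all edges with a fixed width. If you wanted the edge widths to reflect interaction counts as described here, one approach (a sketch assuming the same csv columns, not the code used for the charts) is to count replies per user pair and pass those counts as the width list:

import networkx as nx
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv('C:/cbc_comments/data/article_comments.csv')   # hypothetical file name
df = df.dropna(subset=['replied_to_user'])

# count interactions per pair of users, regardless of direction
pairs = [tuple(sorted(p)) for p in zip(df['comment_user'], df['replied_to_user'])]
pair_counts = pd.Series(pairs).value_counts()

G = nx.Graph()
for (u, v), n in pair_counts.items():
    G.add_edge(u, v, count=n)

pos = nx.kamada_kawai_layout(G)
widths = [G[u][v]['count'] for u, v in G.edges()]    # wider line = more interactions
nx.draw_networkx_edges(G, pos, width=widths)
nx.draw_networkx_nodes(G, pos, node_color='red', node_size=50)
plt.axis('off')
plt.show()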

Most of the article comments sections that were analysed had one or more prolific commenters (represented by larger nodes). In addition, some comment users attract a greater number of replies (represented by edges).

Examples of visualizations are provided below. View the complete list of CBC comments visualizations here.

Click on an image to view it full size; some of them are very big, and you will be able to zoom in for a more detailed view.

On COVID restrictions, our governments keep firing up the gaslights and shifting the goalposts

 

In a fiery speech, O’Toole says Canada is ‘drowning in debt and division’ on Trudeau’s watch

 

Trudeau calls for global carbon tax at COP26 summit

 

RCMP union says it supports a Mountie’s ‘right’ to refuse vaccination

 

View more CBC comments visualizations here.

 

Python code to create the NetworkX charts is provided below and in the Github repository.

import networkx as nx
import matplotlib.pyplot as plt
import pandas as pd
import math

# get paths and files
file_path_csv = 'C:/cbc_comments/data/'
file_path_image = 'C:/cbc_comments/image/'
file_url = 'https://www.cbc.ca/news/politics/trudeau-carbon-tax-global-1.6233936'
file_name = file_url.replace('https://www.cbc.ca/news/','').replace('/','_')

# read comments csv (created by the scraping script) into df
df = pd.read_csv(file_path_csv + file_name + '.csv')

# Drop comments without any replies
df.dropna(subset=['replied_to_user'], how='all', inplace=True)

# derive 'minutes' elapsed since the first comment (used below as the edge color weight)
comment_times = pd.to_datetime(df['comment_time'])
df['minutes'] = (comment_times - comment_times.min()).dt.total_seconds() / 60

# Build NetworkX graph from the dataframe, with 'minutes' as the edge attribute
G = nx.from_pandas_edgelist(df, 'comment_user', 'replied_to_user', 'minutes')

# Create node size variable
d = nx.degree(G)

# create edges, and weights list for edge colors
# weights are minutes from first comment 
edges, weights = zip(*nx.get_edge_attributes(G,'minutes').items())

# create variable to increase graph figure size based on number of nodes to make more readable
factor = math.sqrt(len(G.nodes()) * 0.01)

# Create plot
plt_width = 25 * factor
plt_height = 25 * factor
fig, ax = plt.subplots(figsize=(plt_width, plt_height))
fig.set_facecolor('black')
ax.set_facecolor('black')

# create layout kamada_kawai_layout seemed best!
#pos = nx.spring_layout(G, k=.10, iterations=20)
#pos = nx.spring_layout(G)
pos = nx.kamada_kawai_layout(G)
#pos = nx.fruchterman_reingold_layout(G)

# draw edges
nx.draw_networkx_edges(
    G, 
    pos,
    arrows=True,
    arrowsize=20,
    edgelist=edges,
    edge_color=weights,
    width=1.0,
    edge_cmap=plt.cm.spring,
    node_size=[(d[node]+1) * 200 for node in G.nodes()], # match node sizes so edges/arrows stop at the node border
)

# draw nodes
nx.draw_networkx_nodes(
    G, 
    pos,
    node_color='red',
    alpha = 0.7,
    edgecolors='white', #color of node border
    node_size=[(d[node]+1) * 200 for node in G.nodes()],
)

# draw labels
nx.draw_networkx_labels(
    G, 
    pos, 
    labels=None, 
    font_size=10, 
    font_color='white', 
    font_family='sans-serif', 
    font_weight='normal', 
    alpha=None, 
    bbox=None, 
    horizontalalignment='center', 
    verticalalignment='center', 
    ax=None, 
    clip_on=False
)

# create variables to use in chart title
min_comment_time = df['comment_time'].min()[:-3]
max_comment_time = df['comment_time'].max()[:-3]
comment_count = len(df)

# create chart title text
title_text = file_url + '\n' + str(comment_count) + ' comments & replies '+ 'from: ' + min_comment_time + ' to: ' + max_comment_time

# add chart title
plt.title(title_text, fontsize=26 * factor, color='white')

#  save the image in the img folder:
plt.savefig(file_path_image + 'network_' + file_name + '.png', format="PNG")






 

CBC news comments data scraping and word cloud visualization

The CBC (Canadian Broadcasting Corporation) news website articles often have a comments section. It would be interesting to see the interactions between comments and replies, to learn which users make the most comments, and to find the most frequently used words and phrases.

See the results: https://sitrucp.github.io/cbc_comments/image_grid.html

The comments section is at the end of the story.

Unfortunately, the comment delivery method makes it very difficult to read all of the comments because it uses the “endless scrolling” format.

This requires clicking a “SHOW MORE” button at the bottom of the comments again and again to show more comments.

In addition, longer comments require clicking a “» more” link to reveal hidden text, and comments with multiple replies require clicking a “SHOW 2 OLDER REPLIES” button to show more replies.

In order to see all of the comments and their complete text we would need a process that would effectively click through all of the buttons above until all of the comments and their content was displayed on the webpage.

Once all of the content was visible on the webpage it could be saved locally and Python BeautifulSoup could be used to extract all comments and their content and save it in a tabular data format.

Using Chrome browser's “Inspect”, “View page source” (Ctrl-U) and “Developer tools” (Ctrl-Shift-i) quickly revealed the relevant HTML tags behind the buttons identified above. These are the elements that need to be “clicked” again and again until all the comments and their content are displayed on the webpage.

Relevant code is provided below and can be found in this Github repository.

View complete list of CBC comments visualizations here.

// SHOW MORE COMMENTS
// div tag will have style="display: none;" if there are no more comments otherwise it is displayed
<div class="vf-load-more-con" style="display: none;">
<a href="#" class="vf-load-more vf-text-small vf-strong">Show More</a>
</div>

// SHOW REPLIES
// div tag will have class "hidden" if there are no more replies to show, otherwise it is displayed
<div class="vf-comment-replies hidden">
<a class="vf-replies-button vf-strong vf-text-small" href="#">Show <span class="vf-replies">0</span> older replies</a>
</div>

// SHOW MORE COMMENT TEXT
// tag is displayed only when comment has hidden text otherwise the tag is not present
<a href="#" class="vf-show-more" data-action="more">» more</a>

The button clicking was somewhat automated using the Javascript below executed in the Developer tools console. The process currently requires pasting the code into the console and manually executing it. Step 1 required some babysitting to ensure it runs to completion satisfactorily.

The workflow to show all comments and their content is as follows:

    • Step 1: Run “STEP 1 – Show more comments” javascript in browser console.
    • Step 2: Run “STEP 2 – Show replies” javascript in browser console.
    • Step 3: Run “STEP 3 – Show more comment text” javascript in browser console.

At this point, all the comments and their content are displayed on the webpage.

    • Step 4: Save webpage locally.
    • Step 5: Run Python script to scrape local webpage and save data as csv file.
    • Step 6: Open csv in Excel or analyse using your favourite data visualization tool.
//STEP 1 - Show more comments - pages with 1000s of comments get slower and the Show More button can take longer than the 5000 ms interval, so the script may need to be rerun manually

var timer = setInterval(getMore, 5000);
function getMore() {
    moreDiv = document.getElementsByClassName('vf-load-more-con')[0];
    if(moreDiv.style.display === "none") {
        console.log('vf-load-more comments finished');
        clearInterval(timer);
        return;
    }
    console.log('More comments');
    moreDiv.childNodes[0].nextElementSibling.click();
}

//STEP 2 - Show replies - loops to auto show all comments' replies

var buttons = document.getElementsByClassName('vf-replies-button');
console.log(buttons.length, 'vf-replies-button')
for(var i = 0; i < buttons.length; i++) { 
    buttons[i].click(); 
    console.log('click', i, 'of', buttons.length) 
}
console.log('vf-replies-button finished');

//STEP 3 - Show more comment text - loops to show all comments' text

var buttons = document.getElementsByClassName('vf-show-more');
console.log(buttons.length, 'vf-show-more buttons')
for(var i = 0; i < buttons.length; i++) { 
    buttons[i].click(); 
    console.log('click', i, 'of',buttons.length) 
}
console.log('vf-show-more comment text finished');

Once all the comments and their content are displayed on the webpage, Step 4 is to save the webpage locally. You need to save it as a complete HTML page so that the JavaScript is saved too; otherwise the saved page will be blank.

Then Step 5 is to run the following Python code to extract the comment data into a csv file.

This uses Python BeautifulSoup to extract HTML tag data into a Pandas dataframe which is then saved locally as a csv file.

import sys, os
import csv
import re
from datetime import datetime, timedelta
from bs4 import BeautifulSoup 
import pandas as pd

file_path_html = 'C:/cbc_comments/html/'
file_path_csv = 'C:/cbc_comments/data/'

file_url = 'https://www.cbc.ca/news/politics/trudeau-carbon-tax-global-1.6233936'

file_name = file_url.replace('https://www.cbc.ca/news/','').replace('/','_') + '.html'

soup = BeautifulSoup(open(file_path_html + file_name, encoding='utf8').read(), 'html.parser')

publish_date_raw = soup.find('time', class_='timeStamp')['datetime'][:-5]
publish_date = datetime.strptime(str(publish_date_raw), '%Y-%m-%dT%H:%M:%S')
vf_comments = soup.find('div', class_='vf-comments')
vf_comment_threads = soup.find_all('div', class_='vf-comment-container')
vf_usernames = soup.find_all('button', class_='vf-username')

# create comment data list of lists
comment_data = []
replies = []

for thread in vf_comment_threads:
    # children = data_ids.findChildren()
    # div_data_id = soup.find('div', class_='vf-comment')
    data_id = thread['data-id']
    username = thread.find('button', class_='vf-username').get_text()
    comment_time_str = thread.find('span', class_='vf-date').get_text().replace('s ago', '')
    comment_time_int = int(re.sub('[^0-9]', '', comment_time_str))
    # convert the relative "x minutes/hours ago" text into elapsed minutes (default 0 if not recognized)
    elapsed_minutes = 0
    if 'minute' in comment_time_str:
        elapsed_minutes = comment_time_int
    if 'hour' in comment_time_str:
        elapsed_minutes = comment_time_int * 60
    comment_text_raw = thread.find('span', class_='vf-comment-html-content').get_text()
    comment_time = publish_date - timedelta(minutes=elapsed_minutes)
    if 'Reply to @' in comment_text_raw:
        comment_type = 'reply'
        replied_to_user = comment_text_raw.split(": ",1)[0].replace('Reply to @', '').strip()
        try:
            comment_text = comment_text_raw.split(": ",1)[1].strip()
        except IndexError:
            comment_text = 'no text'
    else:
        comment_type = 'parent'
        replied_to_user = ''
        comment_text = comment_text_raw.strip()

    comment_data.append((
        data_id, 
        publish_date, 
        comment_time,
        username, 
        comment_type, 
        replied_to_user, 
        comment_text, 
        file_name.replace('.html', ''), 
        file_url))

df_comment_data = pd.DataFrame(
    list(comment_data), 
    columns=[
    'data_id', 
    'publish time', 
    'comment_time', 
    'comment_user', 
    'comment_type', 
    'replied_to_user', 
    'comment_text', 
    'file_name',
    'file_url'])

df_comment_data.to_csv(
    file_path_csv + file_name.replace('.html', '.csv'), 
    encoding='utf-8', 
    index=False)

Now that you have a nice tabular format csv data file you can do Step 6 and open the csv in Excel/Google Sheets or analyse the data using your favourite data visualization tool.

Comments Word Cloud

One of the visualizations I created was a comment word cloud. This used the csv file that was created above as a data source.

The Python NLTK (Natural Language Toolkit) module was used to remove stop words and punctuation and to tokenize the comment text, and the Python WordCloud module was used to create the word cloud chart.

import csv
import string
from string import punctuation
import pandas as pd
import matplotlib.pyplot as plt
from wordcloud import WordCloud
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize, sent_tokenize
lemma = nltk.wordnet.WordNetLemmatizer()

# get paths and files
file_path_html = 'C:/cbc_comments/html/'
file_path_csv = 'C:/cbc_comments/data/'
file_path_image = 'C:/cbc_comments/image/'
file_url = 'https://www.cbc.ca/news/politics/trudeau-carbon-tax-global-1.6233936'
file_name = file_url.replace('https://www.cbc.ca/news/','').replace('/','_')

# read csv into df
df = pd.read_csv(file_path_csv + file_name + '.csv')

# Drop null comment text df records
df.dropna(subset=['comment_text'], how='all', inplace=True)

# Combine all comment text into one huge text
text = ' '.join(comment.lower() for comment in df.comment_text)

# clean up comment text data
stop_words = set(stopwords.words('english'))
punctuation = list(punctuation)

tokens = word_tokenize(text)
filtered_text1 = [token for token in tokens if token not in stop_words]
filtered_text2 = [idx for idx in filtered_text1 if not any(punc in idx for punc in string.punctuation)]
filtered_text3 = [item for item in filtered_text2 if len(item)>1]
filtered_text4 = [x for x in filtered_text3 if not isinstance(x, int)]
filtered_text = [lemma.lemmatize(x) for x in filtered_text4]

# Create wordcloud
wordcloud = WordCloud(
    width=800, 
    height=450,
    font_path='c:/windows/font/ARLRDBD.TTF',
    colormap='rainbow',
    max_font_size=80,
    collocations=False, 
    max_words=1000, 
    background_color='black'
    ).generate(' '.join(filtered_text))

plt.figure(
    figsize=(20,10), 
    facecolor='k'
    )
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis('off')
plt.tight_layout(pad=0)
plt.show()

# Save the image in the img folder:
wordcloud.to_file(file_path_image + file_name + '.png')

The word cloud for this story’s comments looks like this.

Scraping public health web page using Python, BeautifulSoup4, Requests and Pandas

During the 2020 COVID-19 pandemic in Canada I wanted to get COVID-19 confirmed cases counts data for the city of Montreal.

The data I wanted was made freely available by the Quebec Government’s Health Montreal website in a tabular format that was updated regularly.  EDIT: Health Montreal stopped publishing data so the link no longer works!

I wanted to be able to use this data for this Leaflet choropleth map visualization. If interested, read more details on this visualization in another blog post.

There are many ways to get data from web pages. First I did it manually by copy and paste into Excel. This is ok for a one time analysis. You can even use Excel Power Query web feature to automate this a bit more. However, if you want to fully automate getting data from a web page you should use web scraping techniques.

Note that the code described below is available in this Github repository https://github.com/sitrucp/covid_montreal_scrape_data.

Initial data retrieval and transformation done using Excel
To get the web page data, at first, I simply manually copied and pasted into an Excel workbook. This was quite easy to do as the tabular format copies and pastes nicely into an Excel grid.

To automate this a bit more and do some more complex data transformations I switched to using Excel Power Query’s web query feature and Power Query to transform the data for the choropleth map visualization.

Full automation and scheduling using Python, cron job and AWS S3
However, this was intended to be an ongoing analysis, so it needed to be fully automated, with the data retrieval and transformation process run on a scheduled basis.

In addition to scraping the data from the web page, the data had to be made available somewhere on the internet where the choropleth map visualization could freely access it by a url.

As the choropleth map visualization is hosted on Github.io I could have used Git on the web server to do an automated, scheduled push of new data from web server to the Github repository. I decided to give this a pass and try it some other time.

Instead, I chose to upload the data to a public AWS S3 bucket that the choropleth map visualization could access with a simple url for each data file.

Everything from scraping the website to uploading data to AWS S3 was done in Python. The Python code is run on scheduled basis using a cron job on a web server. The cron job runs a few times each evening when the data is usually updated on the website.

Python, BeautifulSoup4, Requests and Pandas were used to retrieve and transform the web page data and create a JSON file that could be uploaded to an AWS S3 bucket and made available to the choropleth map visualization.

The Python module Boto was used to upload the data from the web server to an AWS S3 bucket.

Let's go through the code.

BeautifulSoup4 was used to get the web page and find the specific table that holds the tabular data, as shown below. The table with the counts by neighbourhood was the 4th table on the web page:

import requests
from bs4 import BeautifulSoup

# get health montreal webpage html
url = 'https://santemontreal.qc.ca/en/public/coronavirus-covid-19/'
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')

# get all tables on webpage
tables = soup.find_all('table')

# select 4th table in list of tables on webpage
table = tables[3]

Then Pandas is used to read that table into a dataframe and then add more readable consistent column headers as below:

import pandas as pd

# read table into pandas dataframe
df_table_data_all_cols = pd.read_html(str(table))[0]

# rename columns
df_table_data_all_cols.columns = ['region_name', 'case_count', 'case_percent', 'case_per_100k', 'mort_count', 'mort_per_100k']
df_table_data = df_table_data_all_cols[['region_name', 'case_count', 'case_percent', 'case_per_100k', 'mort_count', 'mort_per_100k']]

The web page table dataframe was merged with the “lookup” dataframe. This merging is basically equivalent to a SQL JOIN:

# join lookup table to scraped data to get geojson_name field to use on map
df_table_data_w_lookup = pd.merge(df_montreal_regions_lookup, df_table_data, left_on='website_name', right_on='region_name', how='left')
df_table_data_final = df_table_data_w_lookup[['website_name', 'region_name', 'geojson_name', 'case_count', 'case_percent', 'case_per_100k', 'mort_count', 'mort_per_100k']]

The lookup table has one row per Montreal neighbourhood with 2 columns: one for the Health Montreal website neighbourhood name and a second for the Leaflet map’s geoJSON geographical region boundary names. This is required because the Health Montreal website neighbourhood names were not identical to the map’s geographical region boundary names.
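The lookup table itself is just a small two-column mapping. A minimal sketch of what df_montreal_regions_lookup could look like is below; the neighbourhood name strings are illustrative only, not the exact names used by Health Montreal or the geoJSON file.

import pandas as pd

# illustrative rows only - the real table has one row per Montreal neighbourhood
df_montreal_regions_lookup = pd.DataFrame([
    {'website_name': 'Ahuntsic-Cartierville', 'geojson_name': 'Ahuntsic-Cartierville'},
    {'website_name': 'Cote-des-Neiges-Notre-Dame-de-Grace', 'geojson_name': 'Côte-des-Neiges-Notre-Dame-de-Grâce'},
    {'website_name': 'Villeray-Saint-Michel-Parc-Extension', 'geojson_name': 'Villeray-Saint-Michel-Parc-Extension'},
])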

Of course, I could have modified the map's geographical region boundary names to match the Health Montreal naming convention, but creating a “lookup” table was easier and provided flexibility in case the Health Montreal table's names changed (which they did, in fact, several times!).

The Python code does a check to see if the current data on the web page is the same as the previously retrieved data. I only wanted to upload new data to AWS S3 if it was necessary. This check is done by comparing the total case count on the web page to the previous case count:

# if new is diff from prev, update files and upload to aws
if str_total_case_prev == str_total_case_new:
    scrape_result = 'no change, case total is still same as prev case total: ' + str_total_case_prev
else:
    # create scrape result string to print to cron log
    scrape_result = 'new cases found: ' + str_total_case_new + ' prev case total: ' + str_total_case_prev
    # transform pandas dataframe into dictionary to write as json
    json_table = df_table_data_final.to_dict('records')
    # write new montreal covid_data to json file for map to use
    with open('uploads/montreal_covid_data.json', 'w') as f:
        f.write('var covid_data = \n')
        json.dump(json_table, f, ensure_ascii=True)
    # write today's date to use in index page as last updated date
    with open('uploads/last_update_date.json', 'w') as f:
        f.write('var last_update_date = \n')
        json.dump(todays_date, f)
    upload_to_aws()

If the counts are the same, the code stops. If the new count is different from the previous count, the code creates new data files that are uploaded to the AWS S3 bucket.

A scrape_results string is also created that is written to the cron log.

Uploading to an AWS S3 bucket is conceptually quite straightforward. The Python module Boto makes it easy to create the connection and bucket definitions:

from boto.s3.connection import S3Connection

# create aws S3 connection
conn = S3Connection(canada_covid_aws_keys['AWS_KEY'], canada_covid_aws_keys['AWS_SECRET'])
bucket = conn.get_bucket('canada-covid-data')

The bucket itself has a Public Policy so anyone can read the data files. Each bucket file has a public url so the map visualization can simply reference these to get the data.

However, authentication is required in order to transfer the data from the web server to the S3 bucket. So there is some behind-the-scenes setup work to do on the AWS side: first, to create and configure the bucket, and second, to create and configure the IAM objects to do authentication.

An IAM User Policy was created to allow that User to write, read and delete on that bucket. The User has an AWS key and secret that are provided as part of the Boto connection to do the S3 authentication. Of course the key and secret should not be exposed and are imported into the Python code from another non-public location on the web server.
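One simple way to keep the key and secret out of the public code, matching the canada_covid_aws_keys dictionary used in the connection code above, is a small config module stored in a non-public location on the server. The file and path names below are illustrative.

# config_aws_keys.py -- stored outside the web root / public repository (illustrative name)
canada_covid_aws_keys = {
    'AWS_KEY': 'YOUR_AWS_ACCESS_KEY_ID',
    'AWS_SECRET': 'YOUR_AWS_SECRET_ACCESS_KEY',
}

# in the scraping script
import sys
sys.path.insert(0, '/path/to/non_public_location')     # hypothetical path
from config_aws_keys import canada_covid_aws_keys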

Once the connection is made, the Python code deletes the existing files on S3 before uploading the new files:

  
from os import listdir
from os.path import isfile, join
from boto.s3.key import Key

# identify files to be uploaded to aws
upload_files = [f for f in listdir(upload_path) if isfile(join(upload_path, f))]

# delete existing files from bucket
for key in bucket.list():
    bucket.delete_key(key)

# write new files to bucket
for file in upload_files:
    k = Key(bucket)
    k.key = file
    k.set_contents_from_filename(upload_path + file)

The Leaflet map visualization will then show the new updated data next time it is viewed or browser page refreshed.

Dell ecommerce web site scraping analysis

Once upon a time, I needed to find Dell monitor data to analyse.

A quick search brought me to their eCommerce web site which had all the monitor data I needed and all I had to do was get the data out of the website.

To get the data from the website I used Python with the BeautifulSoup module to scrape the webpage and write the data to a csv file, as shown in the code below.

Based on the data I got from the site the counts of monitors by size and country are presented below.


 

However, this data is probably not accurate. In fact, I know it isn't. There were a surprising number of variations in the monitor descriptions, including screen size, which made it hard to get quick, accurate counts. I had to do some data munging to clean up the data, but there is still a bit more to do.

The surprising thing is that there do not appear to be specific data points for each of the monitor description components. This website is being generated from a data source, likely a database of Dell's products. This database does not appear to have fields for each independent data point used to categorize and describe Dell monitors.

The reason I say this is that each monitor description is a single string of text. Within the text string are things like the monitor size, model, common name, and various other features.

These are not in the same order, and they do not all use the same spelling or format, such as the choice of text separators or lower and upper case.

Most descriptions are formatted like this example:

“Dell UltraSharp 24 InfinityEdge Monitor – U2417H”.

However, the many variations on this format are listed below. There is obviously no standardization in how Dell enters monitor descriptions for their ecommerce site.

  • Monitor Dell S2240T serie S 21.5″
  • Dell P2214H – Monitor LED – 22-pulgadas – 1920 x 1080 – 250 cd/m2 – 1000:1 – 8 ms – DVI-D
  • Dell 22 Monitor | P2213 56cm(22′) Black No Stand
  • Monitor Dell UltraSharp de 25″ | Monitor UP2516D
  • Dell Ultrasharp 25 Monitor – UP2516D with PremierColor
  • Dell 22 Monitor – S2216M
  • Monitor Dell UltraSharp 24: U2415
  • Dell S2340M 23 Inch LED monitor – Widescreen 60Hz Full HD Monitor

Some descriptions include the monitor size unit of measurement, usually in inches, sometimes in centimeters, and sometimes none at all.

Sometimes hyphens are used to separate description sections, but other times the pipe character ( | ) is used to separate content. It's a real mishmash.

Descriptions do not have a consistent order of components. Sometimes the part number is after the monitor size; sometimes it is elsewhere.

The problem with this is that customers browsing the site will have to work harder to compare monitors taking into account these variances.

I’d bet this leads to lost sales or poorly chosen sales that result in refunds or disappointed customers.

I’d also bet that Dell enterprise customers and resellers also have a hard time parsing these monitor descriptions too.

This did affect my ability to easily get the data to do analysis of monitors by description categories because they were not in predictable locations and were presented in many different formats.
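For what it is worth, a somewhat more tolerant parse than the simple split used in the scraper code below is possible. Here is an illustrative sketch using regular expressions against a few of the description formats listed above; the patterns are naive and would need tuning, and this is not the parsing the scraper actually uses.

import re

examples = [
    'Monitor Dell S2240T serie S 21.5"',
    "Dell 22 Monitor | P2213 56cm(22') Black No Stand",
    'Dell UltraSharp 24 InfinityEdge Monitor - U2417H',
]

for desc in examples:
    # first stand-alone 2-digit number (optionally with a decimal) is taken as the screen size
    size = re.search(r'(?<![A-Za-z0-9])(\d{2}(?:\.\d)?)', desc)
    # model codes look like U2417H, P2213, S2240T, UP2516D
    model = re.search(r'\b([A-Z]{1,2}\d{4}[A-Z]{0,2})\b', desc)
    print(desc, '->',
          size.group(1) if size else 'unknown',
          model.group(1) if model else 'unknown')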

Another unusual finding was that it looks like Dell has assigned a default set of 7 monitors to a large number of two-digit country codes. For example, Bhutan (bt) and Bolivia (rb) both have the same 7 records, as do many others. Take a look at the count of records per country at the bottom of the page. Many countries have only 7 monitors.
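This can be checked directly from the scraped csv. A minimal sketch, assuming the dell_monitors.csv produced by the scraper below parses cleanly, of counting records per country:

import pandas as pd

df = pd.read_csv('dell_monitors.csv')

# number of scraped monitor records per country
counts = df.groupby(['country_code', 'country']).size()
print(counts)

# how many countries got the default set of 7 records
print((counts == 7).sum(), 'countries have exactly 7 records')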

Here is the step by step process used to scrape this data.

The screenshot below shows the ecommerce web site page structure. The monitor information is presented on the page in a set of nested HTML tags which contain the monitor data.

dell ecommerce screenshot

These nested HTML tags can be scraped relatively easily. A quick review revealed that the web pages contained identifiable HTML tags that held the data I needed. Those tags are named in Python code below.

The website’s url also had consistent structure so I could automate navigating through paged results as well as navigate through multiple countries to get monitor data for more than one Dell country in the same sessions.

Below is an example of the url for the Dell Canada eCommerce web site’s page 1:

https://accessories.dell.com/sna/category.aspx?c=ca&category_id=6481&l=en&s=dhs&ref=3245_mh&cs=cadhs1&~ck=anav&p=1

The only two variables in the url that change for crawling purposes are:

  • The “c” variable was a 2 character country code eg “ca” = Canada, “sg” = Singapore, “my” = Malaysia, etc.
  • The “p” variable was a number representing the page of the country's results, with about 10 monitors shown per page. No country I looked at had more than 5 pages of monitors.

Dell is a multi-national corporation so likely has many countries in this eCommerce database.

Rather than guess what they are I got a list of two character country codes from Wikipedia that I could use to create urls to see if that country has data. As a bonus the Wikipedia list gives me the country name.

The Wikipedia country code list needs a bit of clean-up. Some entries are clearly not countries but some type of administrative designation. Some countries are listed twice with two country codes. For example, Argentina has “ar” and “ra”. For practical purposes, if the Dell url can't be created from a country code in this list, then the code just skips to the next country code.

The Python code I used is shown below. It outputs a csv file with the website data for each country with the following columns:

  • date (of scraping)
  • country_code (country code entered from Wikipedia)
  • country (country name from Wikipedia)
  • page (page number of website results)
  • desc (HTML tag containing string of text)
  • prod_name (parsed from desc)
  • size (parsed from desc)
  • model (parsed from desc)
  • delivery (HTML tag containing just this string)
  • price (HTML tag containing just this string)
  • url (url generated from country code and page)

The code loops through the list of countries that I got from Wikipedia, and within each country it also loops through the pages of results (while pagenum < 6).

I hard coded the number of page loops to 6 as no country had more than 5 pages of results. I could have used other methods perhaps looping until url returned 404 or page not found. It was easier to hard code based on manual observation.

Dell eCommerce website scraping Python code

#-*- coding: utf-8 -*-
import urllib2
import urllib
from cookielib import CookieJar

from bs4 import BeautifulSoup
import csv
import re
from datetime import datetime

countries={
    'AC':'Ascension Island',
    'AD':'Andorra',
    'AE':'United Arab Emirates',
     ... etc
    'ZM':'Zambia',
    'ZR':'Zaire',
    'ZW':'Zimbabwe'
}

def main():

    output = list()
    todaydate = datetime.today().strftime('%Y-%m-%d')
    
    with open('dell_monitors.csv', 'wb') as file:
        writer = csv.DictWriter(file, fieldnames = ['date', 'country_code', 'country', 'page', 'desc', 'prod_name', 'size', 'model', 'delivery', 'price', 'url'], delimiter = ',')
        writer.writeheader()
        
        for key in sorted(countries):
            country_code = key.lower()
            country = countries[key]
            pagenum = 1      
            while pagenum < 6:
                url = "https://accessories.dell.com/sna/category.aspx?c="+country_code+"&category_id=6481&l=en&s=dhs&ref=3245_mh&cs=cadhs1&~ck=anav&p=" + str(pagenum)
                #HTTPCookieProcessor allows cookies to be accepted and avoid timeout waiting for prompt
                page = urllib2.build_opener(urllib2.HTTPCookieProcessor).open(url).read()
                soup = BeautifulSoup(page, 'html.parser')
                if soup.find("div", {"class":"rgParentH"}):
                    tablediv = soup.find("div", {"class":"rgParentH"})
                    tables = tablediv.find_all('table')
                    data_table = tables[0] # outermost table parent =0 or no parent
                    rows = data_table.find_all("tr")
                    
                    for row in rows:
                        rgDescription = row.find("div", {"class":"rgDescription"})
                        rgMiscInfo = row.find("div", {"class":"rgMiscInfo"})
                        pricing_retail_nodiscount_price = row.find("span", {"class":"pricing_retail_nodiscount_price"})

                        if rgMiscInfo: 
                            delivery = rgMiscInfo.get_text().encode('utf-8')
                        else:
                            delivery = ''
                            
                        if pricing_retail_nodiscount_price:
                            price = pricing_retail_nodiscount_price.get_text().encode('utf-8').replace(',','')
                        else:
                            price = ''
                            
                        if rgDescription:
                            desc = rgDescription.get_text().encode('utf-8')
                            prod_name = desc.split("-")[0].strip()
                            try:
                                size1 = [int(s) for s in prod_name.split() if s.isdigit()]
                                size = str(size1[0])
                            except:
                                size = 'unknown'
                            try:
                                model = desc.split("-")[1].strip()
                            except:
                                model = desc
                                
                            # write the row manually as a comma-joined string (the DictWriter above only wrote the header)
                            results = str(todaydate)+","+country_code+","+country+","+str(pagenum)+","+desc+","+prod_name+","+size+","+model+","+delivery+","+price+","+url
                            
                            file.write(results + '\n')
                    
                    pagenum +=1
                else:
                    #skip to next country
                    pagenum = 6 
                    continue

                
if __name__ == '__main__':
    main()


The Python code scraping output is attached here as a csv file.

The summary below lists, for each country code and country name from Wikipedia, the count of Dell monitor records scraped from the corresponding web pages.

af – Afghanistan – 7 records
ax – Aland – 7 records
as – American Samoa – 7 records
ad – Andorra – 7 records
aq – Antarctica – 7 records
ar – Argentina – 12 records
ra – Argentina – 7 records
ac – Ascension Island – 7 records
au – Australia – 36 records
at – Austria – 6 records
bd – Bangladesh – 7 records
be – Belgium – 6 records
bx – Benelux Trademarks and Design Offices – 7 records
dy – Benin – 7 records
bt – Bhutan – 7 records
rb – Bolivia – 7 records
bv – Bouvet Island – 7 records
br – Brazil – 37 records
io – British Indian Ocean Territory – 7 records
bn – Brunei Darussalam – 7 records
bu – Burma – 7 records
kh – Cambodia – 7 records
ca – Canada – 46 records
ic – Canary Islands – 7 records
ct – Canton and Enderbury Islands – 7 records
cl – Chile – 44 records
cn – China – 46 records
rc – China – 7 records
cx – Christmas Island – 7 records
cp – Clipperton Island – 7 records
cc – Cocos (Keeling) Islands – 7 records
co – Colombia – 44 records
ck – Cook Islands – 7 records
cu – Cuba – 7 records
cw – Curacao – 7 records
cz – Czech Republic – 6 records
dk – Denmark – 23 records
dg – Diego Garcia – 7 records
nq – Dronning Maud Land – 7 records
tp – East Timor – 7 records
er – Eritrea – 7 records
ew – Estonia – 7 records
fk – Falkland Islands (Malvinas) – 7 records
fj – Fiji – 7 records
sf – Finland – 7 records
fi – Finland – 5 records
fr – France – 17 records
fx – Korea – 7 records
dd – German Democratic Republic – 7 records
de – Germany – 17 records
gi – Gibraltar – 7 records
gr – Greece – 5 records
gl – Greenland – 7 records
wg – Grenada – 7 records
gu – Guam – 7 records
gw – Guinea-Bissau – 7 records
rh – Haiti – 7 records
hm – Heard Island and McDonald Islands – 7 records
va – Holy See – 7 records
hk – Hong Kong – 47 records
in – India – 10 records
ri – Indonesia – 7 records
ir – Iran – 7 records
ie – Ireland – 7 records
im – Isle of Man – 7 records
it – Italy – 1 records
ja – Jamaica – 7 records
jp – Japan – 49 records
je – Jersey – 7 records
jt – Johnston Island – 7 records
ki – Kiribati – 7 records
kr – Korea – 34 records
kp – Korea – 7 records
rl – Lebanon – 7 records
lf – Libya Fezzan – 7 records
li – Liechtenstein – 7 records
fl – Liechtenstein – 7 records
mo – Macao – 7 records
rm – Madagascar – 7 records
my – Malaysia – 25 records
mv – Maldives – 7 records
mh – Marshall Islands – 7 records
mx – Mexico – 44 records
fm – Micronesia – 7 records
mi – Midway Islands – 7 records
mc – Monaco – 7 records
mn – Mongolia – 7 records
mm – Myanmar – 7 records
nr – Nauru – 7 records
np – Nepal – 7 records
nl – Netherlands – 8 records
nt – Neutral Zone – 7 records
nh – New Hebrides – 7 records
nz – New Zealand – 37 records
rn – Niger – 7 records
nu – Niue – 7 records
nf – Norfolk Island – 7 records
mp – Northern Mariana Islands – 7 records
no – Norway – 19 records
pc – Pacific Islands – 7 records
pw – Palau – 6 records
ps – Palestine – 7 records
pg – Papua New Guinea – 7 records
pe – Peru – 43 records
rp – Philippines – 7 records
pi – Philippines – 7 records
pn – Pitcairn – 7 records
pl – Poland – 4 records
pt – Portugal – 7 records
bl – Saint Barthelemy – 7 records
sh – Saint Helena – 7 records
wl – Saint Lucia – 7 records
mf – Saint Martin (French part) – 7 records
pm – Saint Pierre and Miquelon – 7 records
wv – Saint Vincent – 7 records
ws – Samoa – 7 records
sm – San Marino – 7 records
st – Sao Tome and Principe – 7 records
sg – Singapore – 37 records
sk – Slovakia – 23 records
sb – Solomon Islands – 7 records
gs – South Georgia and the South Sandwich Islands – 7 records
ss – South Sudan – 7 records
es – Spain – 10 records
lk – Sri Lanka – 7 records
sd – Sudan – 7 records
sj – Svalbard and Jan Mayen – 7 records
se – Sweden – 6 records
ch – Switzerland – 21 records
sy – Syrian Arab Republic – 7 records
tw – Taiwan – 43 records
th – Thailand – 40 records
tl – Timor-Leste – 7 records
tk – Tokelau – 7 records
to – Tonga – 7 records
ta – Tristan da Cunha – 7 records
tv – Tuvalu – 7 records
uk – United Kingdom – 35 records
un – United Nations – 7 records
us – United States of America – 7 records
hv – Upper Volta – 7 records
su – USSR – 7 records
vu – Vanuatu – 7 records
yv – Venezuela – 7 records
vd – Viet-Nam – 7 records
wk – Wake Island – 7 records
wf – Wallis and Futuna – 7 records
eh – Western Sahara – 7 records
yd – Yemen – 7 records
zr – Zaire – 7 records

Grand Total – 1760 records