Deploying a car price model using R and AzureML

Recently, Microsoft released the AzureML R package, which allows R users to publish their R models (or any R function) as a web service on the Microsoft Azure Machine Learning platform. Of course, I wanted to test the new package, so I performed the following steps.

Sign up for Azure Machine Learning

To make use of this service you will need to sign up for an account. There is a free tier; go to the Azure Machine Learning website to register.

Get the AzureML package

The AzureML package can be installed from GitHub; follow the instructions here.
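At the time of writing, installation looks roughly like the sketch below; the repository path is an assumption on my part, so follow the linked instructions if it differs.

# install devtools first if needed, then pull the package from GitHub
install.packages("devtools")
devtools::install_github("RevolutionAnalytics/AzureML")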

Create a model in R to deploy

In a previous blog post I described how to estimate the value of a car. I scraped the Dutch site www.autotrader.nl, where used cars are sold. For more than 100,000 used cars I have their make, model, sales price, mileage (kilometers driven) and age. Let’s focus again on Renault; instead of only using mileage, I am also going to use the age and the car model in my predictive splines model. For those who are interested, the Renault data (4 MB, Rda format, with many more variables) is available here.
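Assuming the Rda file has been downloaded, loading it is a one-liner (the file name here is an assumption):

# load the scraped Renault data (file name is an assumption)
load("RenaultCars.Rda")
str(RenaultCars)  # contains Price, Kilometers, Age and model, among other variables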


# natural splines for Kilometers (interacting with car model) and for Age
library(splines)
RenaultPriceModel = lm(Price ~ ns(Kilometers, df=5)*model + ns(Age, df=3), data = RenaultCars)

Adding an interaction between Kilometers and car model increases the predictive power. The model has a nice R-squared (0.92) on a hold-out set; a sketch of that check is shown below. Satisfied with the model, I now create a scoring function from the model object.
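For completeness, here is a minimal sketch of how such a hold-out check could look. The 70/30 split and the seed are assumptions, not necessarily what I used.

# hold-out check: fit on 70% of the data, compute R-squared on the rest
set.seed(123)
idx  = sample(nrow(RenaultCars), 0.7 * nrow(RenaultCars))
test = RenaultCars[-idx, ]
fit  = lm(Price ~ ns(Kilometers, df=5)*model + ns(Age, df=3), data = RenaultCars[idx, ])
pred = predict(fit, newdata = test)
1 - sum((test$Price - pred)^2) / sum((test$Price - mean(test$Price))^2)  # hold-out R-squared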



# scoring function: predicted prices for given kilometers, age and car model
RenaultScorer = function(Kilometers, Age, model)
{
  require(splines)
  return(
    predict.lm(
      RenaultPriceModel, 
      data.frame("Kilometers" = Kilometers, "Age" = Age, "model" = model)
   )
  )
}

# predict the price for a one-year-old Renault Espace with 50,000 km on the odometer
RenaultScorer(50000, 365, "ESPACE")
# 33616.91

To see how estimated car prices vary with age and kilometers, I created some contour plots; a sketch of the plotting code is shown below. It seems that Clios lose value over the years, while Espaces lose value over kilometers. Sell your five-year-old Espace with 100,000 kilometers for 20,000 euros, so that you can buy a new Clio (0 years and 0 kilometers).
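Such a contour plot can be built directly from the scoring function; here is a minimal sketch (the grid ranges are assumptions):

# grid of kilometers and ages, scored for one car model
kms  = seq(0, 200000, length.out = 50)
ages = seq(0, 3650, length.out = 50)          # age in days, up to ten years
grid = expand.grid(Kilometers = kms, Age = ages)
prices = RenaultScorer(grid$Kilometers, grid$Age, "ESPACE")
contour(kms, ages, matrix(prices, nrow = length(kms)),
        xlab = "Kilometers", ylab = "Age (days)", main = "ESPACE")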

[Figure: contour plots of estimated price by age and kilometers]

Before you can publish R functions on Azure, you need to obtain the security credentials of your Azure Machine Learning workspace; see the instructions here. The function RenaultScorer can now be published as a web service on the AzureML platform using the publishWebService function.


library(AzureML)

myWsID = "123456789101112131415"
myAuth = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

# publish the R function to AzureML
RenaultWebService = publishWebService(
  "RenaultScorer",
  "RenaultPricerOnline",
  list("Kilometers" = "float", "Age" = "float", "model" = "string"),
  list("prijs" = "float"),
  myWsID,
  myAuth
)

Test the web service

In R, the AzureML package contains the consumeLists function to call the web service we have just created. The key and the location of the web service are contained in the return object of publishWebService.


### predict the price of a one year old Renault Espace using the web service
RenaultPrice = consumeLists(
  RenaultWebService[[2]][[1]]["PrimaryKey"],
  RenaultWebService[[2]][[1]]$ApiLocation,
  list("Kilometers" = 50000, "Age" = 365, "model" = "ESPACE")
)
RenaultPrice
#         prijs
# 1 33616.90625

In Azure Machine Learning Studio you can also test the web service.

[Figure: Azure ML Studio]

My web service (called RenaultPricerOnline) is now visible in Azure Machine Learning Studio; clicking on it launches a dialog box to enter kilometers, age and model.

[Figure: dialog box in Azure ML Studio]

The call again results in a price of 33616.91. It takes some time, but that is because performance is throttled for free tier accounts.

Conclusion

In R I have created a car price prediction model, but more likely than not, the predictive model will be consumed by an application other than R. It is not difficult to publish an R model as a web service on the Azure Machine Learning platform. Once you have done that, AzureML generates the API key and the link to the web service, which you can provide to anyone who wants to consume the predictive model in their applications.


Soap analytics: Text mining “Goede Tijden Slechte Tijden” plot summaries

Sorry for the local nature of this blog post. I was zapping between channels on Dutch television the other day and stumbled upon “Goede Tijden Slechte Tijden” (GTST, “Good Times Bad Times”), a Dutch soap series broadcast by RTL Nederland. I must confess, I watched (had to watch) this years ago because my wife was watching it… My gut feeling with these daily soap series is that missing a few months or even years does not matter: once you’ve seen some GTST episodes, you’ve seen them all, because the story line is always very similar. Can we use some data science to test whether this gut feeling makes sense? I am using R and SAS to investigate this and to see if more interesting soap analytics can be derived.

Scraping episode plot summaries

First, data is needed. All GTST episode plot summaries are available, from the very first episode in October 1990.

[Figure: the GTST episode plot summary archive]

A great R package for scraping data from web sites is rvest by Hadley Wickham; I can advise anyone to learn it. Luckily, the website structure of the GTST plot summaries is not that difficult, and with the following R code I was able to extract all episodes. First I wrote a function that extracts the plot summaries of one specific month.

# rvest for scraping, stringr for text clean-up
library(rvest)
library(stringr)

# extract the plot summaries of one specific month
# (html() was rvest's page reader at the time; newer versions use read_html())
getGTSTdata = function(httploc){
  tryCatch({
    gtst = html(httploc) %>%
     html_nodes(".mainarticle_body") %>%
     html_children()

    # The dates are in bold, the plot summaries are normal text
    texts = names(gtst[[1]]) == "text"
    datumsel = names(gtst[[1]]) == "b"

    episodeplot = gtst[[1]][texts] %>%
     sapply(html_text) %>%
     str_replace_all(pattern='\n'," ")

    episodedate = gtst[[1]][datumsel] %>%
     sapply(html_text)

    # put data in a data frame and return as results
    return(data.frame(episodeplot, episodedate))
  },
    error = function(cond) {
      message(paste("URL does not seem to exist:", httploc))
      message("Here's the original error message:")
      message(cond)
      # Choose a return value in case of error
      return(NULL)
    }
  )
}

This function is then used inside a loop over all months to get all the episode summaries. Some months have no episodes, and the corresponding link does not exist (the actors are on summer holiday!), so I used tryCatch inside my function, which lets the loop continue when a link does not exist.

months = c("januari", "februari","maart","april","mei","juni","juli","augustus","september","oktober","november","december")
years = 1990:2012

GTST_Allplots = data.frame()

for (j in years){
  for(m in months){
    httploc = paste("http://www.rtl.nl/soaps/gtst/meerdijk/tvgids/", m, j, ".xml", sep="")
    out = getGTSTdata(httploc)

    if (!is.null(out)){
      # append the year to the scraped episode dates
      # (the function returns a column named 'episodedate')
      out$episodedate = paste(out$episodedate, j)
      GTST_Allplots = rbind(GTST_Allplots, out)
    }
  }
}

As a result of the scraping, I got 4090 episode plot summaries. Real GTST fans will know that there are more episodes; the more recent episode summaries (from 2013 onward) are on a different site. For this analysis I did not bother to use them.
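A quick check of the scraped result:

nrow(GTST_Allplots)               # 4090 plot summaries
head(GTST_Allplots$episodedate)   # episode dates with the year appended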

Text mining the plot summaries

The episode summaries are now imported into SAS Enterprise Guide; the following figure shows the first few rows of the data.

[Figure: first few rows of the episode data in SAS Enterprise Guide]

With SAS Text Miner it is very easy and fast to analyze unstructured data; see my earlier blog post. What are the most important topics that can be found in all the episodes? Let’s generate them. I can use either the Text Topic node or the Text Cluster node in SAS; the difference is that with the Topic node an episode can belong to more than one topic, while with the Cluster node an episode belongs to only one cluster. An illustrative R analogue of this distinction is sketched below.
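The analysis in this post is done in SAS Text Miner’s GUI, but to make the topic-versus-cluster distinction concrete for readers without SAS, here is a rough R sketch using the tm and topicmodels packages (an illustration, not what I used):

library(tm)
library(topicmodels)

# rough R analogue of the SAS nodes (illustrative)
corp = VCorpus(VectorSource(as.character(GTST_Allplots$episodeplot)))
corp = tm_map(corp, content_transformer(tolower))
corp = tm_map(corp, removePunctuation)
corp = tm_map(corp, removeWords, stopwords("dutch"))
dtm  = DocumentTermMatrix(corp)
dtm  = dtm[slam::row_sums(dtm) > 0, ]   # LDA requires non-empty documents

lda = LDA(dtm, k = 30)
terms(lda, 5)                 # top terms per topic
topics(lda)[1:5]              # one topic per episode, like the Text Cluster node
posterior(lda)$topics[1:5, ]  # topic probabilities per episode, like the Text Topic node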

[Figure: Text Miner settings]

I have selected 30 topics to be generated; you can experiment with different numbers of topics. The resulting topics can be found in the following figure.

[Figure: GTST topics]

Looking at the different clusters found, it turns out that many topics are described by the names of the characters in the soap series. For example, topic number 2 is described by terms like “Anton”, “Bianca”, “Lucas”, “Maxime” and “Sjoerd”; these occur together in 418 episodes. And topic number 25 involves the terms “Ludo”, “Isabelle”, “Martine” and “Janine”. Here is a picture collage to show this in a more visually appealing way. I have only attached faces for six clusters; the other clusters are still the colored squares. The clusters that you see in the graph are arranged according to the underlying distances between the clusters.

[Figure: collage of character clusters]

Zoom in on a topic

Now let’s say I am interested in the characters of topic 25 (the Ludo and Isabelle story line). Can I discover sub-topics within this story line? I apply a filter on topic 25, so that only the episodes that belong to the Ludo and Isabelle story line are selected, and generate a new set of topics (call them sub-topics to distinguish them from the original topics). A sketch of the same step in R follows below.
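In the R analogue above, that filtering step could look like this (the probability cut-off is an assumption):

# keep episodes that load on topic 25, then fit a fresh topic model on that subset
sel    = posterior(lda)$topics[, 25] > 0.2
ldaSub = LDA(dtm[sel, ], k = 10)
terms(ldaSub, 5)   # candidate sub-topics of this story line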

[Figure: Text Miner flow to investigate a specific topic]

What are some of the sub-topics of the Ludo and Isabelle story line?

  • Getting money back
  • George, Annie Big panic, very worried
  • Writing farewell letter
  • Plan of Jack brings people in danger

As a data scientist I should now go back to the business and ask for validation. So I am reaching out now: are there any real hardcore GTST fans out there who can

  • explain why certain characters are grouped together?
  • explain why certain groups of characters are closer to each other than other groups?
  • recognize the sub-topics in the Ludo and Isabelle story line?

Text profiling

To come back to my gut feeling: can I see whether the different story lines remain similar? I can use the Text Profile node in SAS Text Miner to investigate this. The Text Profile node enables you to profile a target variable using terms found in the documents. For each level of a target variable, the node outputs a list of terms from the collection that characterize or describe that level. The approach uses a hierarchical Bayesian belief model to predict which terms are the most likely to describe the level. In order to avoid merely selecting the most common terms, prior probabilities are used to down-weight terms that are common in more than one level of the target variable.

The target variable can also be a date variable. In my scraped episode data I have the date of each episode, and a nice feature allows the user to set the aggregation level. Let’s use year as the aggregation level. A crude R approximation of this kind of profiling is sketched below.
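SAS’s implementation is the Bayesian model described above. Purely to illustrate the down-weighting idea, a crude R approximation could compute the lift of the within-year term proportions over the overall proportions; it reuses dtm from the earlier sketch and assumes a vector year with one entry per row of dtm:

# crude approximation of term profiling by year (not SAS's Bayesian model)
M       = as.matrix(dtm)
byYear  = rowsum(M, group = year)              # term counts per year
p       = byYear / rowSums(byYear)             # within-year term proportions
overall = colSums(byYear) / sum(byYear)        # overall term proportions
lift    = sweep(p, 2, overall, "/")            # down-weights ubiquitous terms
head(sort(lift["1991", ], decreasing = TRUE))  # most characteristic terms of 1991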

[Figure: Text Profile node in SAS]

The output consists of different tables and plots; two interesting ones are the Term Time Series plot and the Target Similarity plot. The first shows, for a selected year, the most important terms and how these terms evolve over the other years. If we select 1991, we get the following graph.

[Figure: terms over the years]

Myriam was an important term (character) in 1991, but we see that her role stopped in 1995. Jef was a slightly less important term, but his character continued for a very long time in the series. The similarity plot shows the similarities between the sets of episodes of the different years. The distances are calculated on the term beliefs of the chosen terms.
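In the same crude R analogue, year-to-year similarity could be eyeballed from distances between the yearly term profiles:

d = dist(p)       # p from the sketch above: within-year term proportions
plot(hclust(d))   # or cmdscale(d) for a 2-D similarity map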

[Figure: GTST similarities by year]

We can see very strong similarities between the years 2000 and 2001; two other years that are very similar are 1996 and 1997. Very isolated years are 2009 and 2012; maybe the writers tried a new story line that was unsuccessful and canceled it. Now let’s focus on one season of GTST, season 22 (September 2011 to May 2012), and look at similarities between months. They are shown in the following figure.

[Figure: season 22 of GTST, monthly similarities]

It seems that the months September ’11, November ’11 and May ’12 are very similar, while the rest of the months are quite distinct.

Conclusion

The combination of R/rvest and SAS Text Miner is ideal for quickly scraping (text) data from sites and easily getting first insights into those texts. Applying this to Goede Tijden Slechte Tijden (GTST) episode summaries rejects my gut feeling that once you’ve seen some GTST episodes you’ve seen them all. It turns out that story lines between years, and story lines within one year, can be quite dissimilar!

Without watching thousands of episodes I can quickly get an idea of which characters belong together, what the main topics of a season are, the rise and fall of characters in the series, and how topics change over time. But keep in mind: if you are on a train and hear two nice ladies talking about GTST, will this analysis be enough to have a meaningful conversation with them? … I don’t think so!

Thanks @ErwinHuizenga and @josvandongen for reviewing.