Back of the Envelope Series on NFL Stadium Deals


With three NFL teams vying to relocate to Los Angeles, much of the press coverage has been devoid of financial analysis.  This series will use ten years' worth of NFL valuation data from Forbes magazine to try to understand the financial implications for both the teams and their communities.  In the past ten years, the following NFL teams have built new stadiums:

  • Arizona Cardinals
  • Indianapolis Colts
  • Dallas Cowboys
  • Minnesota Vikings
  • New York Giants
  • New York Jets
  • Kansas City Chiefs
  • New Orleans Saints
  • San Francisco 49ers

Continue reading


An XBRL Library for Perl

I’ve been working for some time on a Perl module to parse XBRL, a complex XML-based format for reporting financial information.  The US SEC requires publicly traded firms to provide their financial reports in XBRL.  The goal of the Perl module is to provide a clear, easy-to-use interface for extracting data from an XBRL instance and using it for another purpose.  The initial release includes a function to render an XBRL instance into a very basic HTML document.  Because the XBRL standard is large and complex, support for its features will be added over subsequent releases.
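For readers unfamiliar with the format: an XBRL instance is an XML document whose child elements are individual facts, each carrying context and unit references. The module above is Perl, but the basic extraction idea can be sketched in Python; the instance below is a toy, hand-invented fragment (the element names only mimic the common us-gaap tagging style):

```python
# Minimal fact extraction from a toy XBRL-ish instance (invented data).
import xml.etree.ElementTree as ET

INSTANCE = """<xbrl xmlns:us-gaap="http://fasb.org/us-gaap/2011">
  <us-gaap:Assets contextRef="FY2011" unitRef="USD" decimals="0">1500000</us-gaap:Assets>
  <us-gaap:Liabilities contextRef="FY2011" unitRef="USD" decimals="0">900000</us-gaap:Liabilities>
</xbrl>"""

root = ET.fromstring(INSTANCE)
facts = {}
for el in root:
    # ElementTree expands prefixes: '{http://fasb.org/us-gaap/2011}Assets'
    # -> strip the namespace wrapper to get the bare concept name.
    name = el.tag.split('}', 1)[-1]
    facts[name] = (float(el.text), el.get('contextRef'))

print(facts['Assets'])
```

A real module has much more to do (contexts, units, linkbases, presentation order), which is why the post notes that support will arrive over several releases.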

Source code and project management are hosted at GitHub.  The module is available via CPAN.

An Open Source WebOS

Today’s announcement from HP that WebOS will be set free as an Open Source project opens up room for some interesting changes in the mobile landscape.  Both CNET’s Stephen Shankland and The VAR Guy think nothing much will come of this move.  As both a long-time Open Source zealot and a mobile developer, I think there is more there there than they do.

Continue reading

More Statistics on Sacramento Housing

In a previous post, I showed a fairly simple regression analysis of housing prices and house size for my ZIP code; the ZIP code served as an easy way to include location in the model. Using PostGIS and geographic data from the City of Sacramento, this post will show a regression analysis (using the R statistical computing project) based on the city’s designated neighborhoods. The raw real estate data come from the Sacramento Bee. After describing the model, I’ll apply it to the last few months of home sales (not used in developing the model) and see how well it does at predicting results.  Continue reading
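Swapping the ZIP code for a designated neighborhood means location enters the regression as a categorical (dummy) variable alongside house size. A minimal sketch of that design, in Python rather than the post's R, with invented prices and a hypothetical "Natomas" dummy:

```python
# Sketch: price ~ size + neighborhood dummy, via ordinary least squares
# solved through the normal equations. All data values are invented.

def gauss_solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(X, y):
    """Least-squares coefficients from the normal equations X'X b = X'y."""
    k, n = len(X[0]), len(X)
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    return gauss_solve(XtX, Xty)

# toy sales: square footage plus a 0/1 flag for a hypothetical neighborhood
sizes = [1000, 1500, 2000, 1200, 1800, 2500]
natomas = [1, 0, 1, 0, 1, 0]
prices = [50000 + 100 * s + 20000 * d for s, d in zip(sizes, natomas)]

X = [[1.0, s, d] for s, d in zip(sizes, natomas)]  # intercept, size, dummy
intercept, per_sqft, premium = ols(X, prices)
print(round(intercept), round(per_sqft), round(premium))
```

With one dummy column per neighborhood (less one, to avoid collinearity with the intercept), the same machinery extends to the city's full neighborhood list; in practice R's `lm()` does this expansion automatically from a factor variable.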

D’oh! Not Again

Having worked for many years at HP, I find the recent announcement cancelling HP’s WebOS program rich with personal irony. Until yesterday’s announcement, I was working on a bar-code scanning application for HP’s Pre 3 phone.

Thinking about life as an independent “app” developer, I’m reminded of Canadian Prime Minister Pierre Trudeau’s famous remark about the United States:

Living next to you is in some ways like sleeping with an elephant. No matter how friendly and even-tempered is the beast, if I can call it that, one is affected by every twitch and grunt.

I hope the new PC company gets off to a good start and gets to keep WebOS, but it will have to move forward without my efforts as an “app” developer.

Using Statistics to Explicate the Sacramento Housing Market

The Sacramento Bee has a database for looking up the prices of home sales.  I was a little disappointed that it only lists the sales and doesn’t do any Zillow-style statistical analysis, so I dumped the data into R and started playing around with it.  Home price versus size versus number of bedrooms is one of the textbook examples for multiple regression, so I thought it would be pretty easy.  It turns out that replicating the Zillow estimate with real-world data is harder than it looks.  So I’m going to start with the smallest model I can get decent results with and build it up over a series of posts.  Below is a regression of price on size in the 95834 ZIP code.  Continue reading
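The starting model is an ordinary least-squares fit of price on square footage, which has a closed-form solution. A minimal sketch in Python (the post itself works in R, and the sale figures below are invented, not the Bee's data):

```python
# Simple least-squares fit of price on size (toy data, not the Bee's records).
sizes = [1100, 1400, 1600, 1900, 2300]
prices = [210000, 255000, 280000, 330000, 390000]

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(prices) / n

# slope = covariance(size, price) / variance(size); intercept from the means
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, prices))
         / sum((x - mean_x) ** 2 for x in sizes))
intercept = mean_y - slope * mean_x

print(f"price = {intercept:.0f} + {slope:.1f} * sqft")
```

The slope here is the implied dollars per square foot for this toy sample; the post's point is that a single-ZIP, single-predictor model like this is only a baseline to build on.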

A Real World Example of A/B Website Testing

A/B testing has gained a lot of interest in recent years as a practical method for improving the results from websites.  My business partner and I produced a mobile application, RecallCheck, that relied on a database we created from FDA and USDA websites.  During our project, the FDA introduced a new website for reporting food-related issues, called the Reportable Food Registry.  For our purposes (using mobile phones to scan bar codes), the result we cared most about was the quality and quantity of UPC codes.  Below is a detailed statistical analysis of the quantity of UPC codes included in published recall notices before and after the FDA made its changes.  While our interest was the UPC codes, the same process can be applied to any change on a website. Continue reading
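A before/after comparison like this boils down to testing whether the fraction of recall notices carrying a UPC code changed. One standard tool for that is a two-proportion z-test; the counts below are invented for illustration, and the post's actual analysis (behind the link) may use a different test:

```python
# Two-sided two-proportion z-test with a pooled standard error.
# The counts are invented: notices containing a UPC code, before vs after.
import math

def two_prop_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF (via erf)
    pval = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, pval

z, p = two_prop_z(18, 120, 47, 130)   # before: 18/120, after: 47/130
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value says the jump in UPC coverage is unlikely to be chance, which is exactly the kind of evidence an A/B comparison is after.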

RecallCheck Post-Mortem

Recently, I’ve been answering a lot of questions about RecallCheck, an Android mobile application that allowed users to scan UPC bar codes on packaged food and find out about any recent recalls involving that product.  I developed and marketed RecallCheck with my business partner Scott. With the project now over, I’d like to cover some of the lessons learned and the reasons I believe we were ultimately unsuccessful. Continue reading

Customer Lifetime Value and Technology Marketing

Fred Wilson over at AVC has stirred up something of a hoo-ha about the value of marketing to a technology startup.  I think Fred has a point about the poor contributions marketing has made to technology startups.  My experience in business and in school is that marketing tends to attract BS artists who talk a fast game but aren’t interested in doing any hard analytical work.  For a big company, this is often not a problem, but for a small one it can be quite dangerous.  In this post I’m going to describe Customer Lifetime Value (CLV) as a useful tool for measuring marketing effort.  Many startup technology firms have ignored this useful tool at their peril. Continue reading
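One common closed form for CLV assumes a constant per-period margin, a constant retention rate, and an infinite horizon, in which case the discounted sum of future margins collapses to a single fraction. A sketch with invented numbers (the post itself doesn't commit to this particular model):

```python
# Infinite-horizon CLV under constant margin m, retention r, discount d:
#   CLV = sum over t>=1 of m * r^t / (1 + d)^t  =  m * r / (1 + d - r)
def clv(margin, retention, discount):
    return margin * retention / (1 + discount - retention)

# invented example: $20/month margin, 80% monthly retention, 1% discount rate
value = clv(20.0, 0.80, 0.01)
print(round(value, 2))
```

The practical use as a marketing yardstick: if acquiring a customer costs more than this number, the campaign destroys value no matter how good the story sounds.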