By Franco Faraudo
Sometimes big ideas that have been circulating in the atmosphere for a long time finally get codified and set in stone. These events often wind up memorialized; even though they are not always the origin of the idea, they are an acceptable stand-in. The official creation of things like agreements, treaties, laws, declarations, commandments and commitments is often done with pomp and circumstance, in dramatic locations fit for the occasion's grandeur.
Sometimes, though, these events happen under the veil of the mundane. They happen at small, invite-only meetings in roadside conference centers in forgettable towns like Sebastopol, California (before you think I am being too harsh on the Northern Californian recovering-hippy enclave, I should say that I grew up just minutes from there and I know well that the Sebastopicians want nothing more than to be left alone, to live their tranquil, quirky and eco-friendly lives undisturbed under the shade of the redwoods). This is the case for the event that is credited with solidifying the tech world's thinking on open data. On December 7th, 2007, a group of thirty tech entrepreneurs, academic researchers and policy wonks met in Sebastopol to write a formal definition for the open data movement.
The principles that they decided on weren't all that groundbreaking:
Open Government Data Principles
Government data shall be considered open if it is made public in a way that complies with the principles below:
1. Complete: All public data is made available. Public data is data that is not subject to valid privacy, security or privilege limitations.
2. Primary: Data is as collected at the source, with the highest possible level of granularity, not in aggregate or modified forms.
3. Timely: Data is made available as quickly as necessary to preserve the value of the data.
4. Accessible: Data is available to the widest range of users for the widest range of purposes.
5. Machine processable: Data is reasonably structured to allow automated processing.
6. Non-discriminatory: Data is available to anyone, with no requirement of registration.
7. Non-proprietary: Data is available in a format over which no entity has exclusive control.
8. License-free: Data is not subject to any copyright, patent, trademark or trade secret regulation. Reasonable privacy, security and privilege restrictions may be allowed.
But the fact that they started the dialogue around what would soon become one of the main talking points for proponents of internet freedom was significant in its own right. As more and more of the general population gets used to the benefits that open access to data provides and comes to understand the dangers of publicly useful data being hoarded by powerful internet giants, open data has become a popular topic of conversation.
The property industry has had its own push toward open data. After years of effort and an eventual DOJ ruling against local Realtor organizations and their Multiple Listing Services, Redfin and Zillow are able to syndicate what were previously private residential listings. Spencer Rascoff, Zillow's co-founder and CEO, went on the record to say, "Anyone whose business model is predicated on the assumption that their secret data will remain secret and proprietary, that's not a sustainable business model. This data will inevitably be free."
But commercial real estate is not residential. Even though sales records are public, just like their residential counterparts, commercial properties don't change hands often, so lease revenue is the main basis for estimating their market value. These lease terms have always been private, a confidential agreement between the landlord and the tenant. This opacity was an asset for large real estate firms, and for those with a heavy geographical or sector-specific focus, giving them the ability to understand the market better than their competitors and conduct the information arbitrage that the property industry has always relied on.