Gladwell’s illusion of choice

June 30th, 2009

Malcolm Gladwell seems to confuse cause and effect, to his argument’s undoing. He ignores the subtlety in the phrase “information wants to be free” that he quotes. For completeness’ sake, let’s take a look at the entire original quote, as spoken by a founder of the WELL, one of the first online communities, over two decades ago:

“On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.”

Many aspects of “information” are already free: conceptually, controlling it de jure is no more feasible than trying to prevent gossip or outlaw singing. Humans already share information freely when they talk to each other in close proximity. As technology makes other kinds of information sharing just as easy as talking in close proximity, and just as much of a non-zero-sum exchange, it is human nature to share it endlessly.

To Gladwell, it appears that there is a choice: do we give some information away for free, or do we put a price on it and pay for our information power plants? This is clearly the wrong way around: some people are still willing to part with money for information because they have not found it easy enough to get it for free. They are not really interested in paying for the distribution of information; they may, however, be interested in paying for its existence in the first place.

But who pays for it, and is it specified as a direct, per-item price, or as something else entirely? Gladwell is fixated on seeing “free” as a $0.00 price. But when the government funds basic research at universities, it is paying for the creation of information that will be distributed effectively for free. When a Hulu user pays with their time and attention during a commercial break, they get their content for free. There is always a tradeoff, but there does not need to be direct remuneration for every single distributed unit in order to satisfy basic economics.

Gladwell fails to accept that the rules may simply change, and that consumers may not be the ones directly paying for what they consume. Or that consumers may not pay content distributors and creators according to the mantra that the information itself is the most important part. I would argue that sometimes the consumer decides that the mode of data delivery is worth some money: witness the Kindle. In other cases, consumers are willing to trade time to get information for free, as happens with movies downloaded over BitTorrent. Consumers have already decoupled their view of the information flow into separate steps of creation and delivery, and they deal with them separately.

In a perfect world, information would be available to anyone who wants it, and the creators of that information would have enough of a financial incentive to create and keep creating, proportional to the aggregate value of their informational assets. Unlike with physical goods, a creator does not lose or gain any scarce physical resources when a copy of the information is made, nor is there usually any opportunity cost.

Creators need to accept that as communication channels open up, and as inter-human connections become faster, easier to establish, and cheaper, these connected humans will not want to be slowed down by an argument that they should pay for something just because of artificially induced scarcity. Setting a speed limit in outer space is imbecilic. Instead, creators need to find ways to get compensated through means not connected directly to distribution. For newspapers and drug companies, this may be grants or government funding. For television and music creators, this may be flat usage taxes or revenue sharing. But the distribution of information will always flow by the path of least resistance, from one person to another, whether we want it to or not, always towards the free if possible.

Gladwell’s argument rested on maintaining control, and on having a choice between giving something away for free and selling it at some price. Really, the only choice left to a content creator in the long term is to give it away or not give it away at all. Once released, their output will end up being free, to some people at the very least. And unless the real alternative is no more movies, no more books, and no more paintings, creators will just have to deal.

an ilya, partly cloned

December 12th, 2008

For those not in the know — Linda and I have just had a kid. You can read about our road so far at http://growingbaby.org

BarcampLA-6 presentation

October 25th, 2008

This weekend is the sixth instance of BarcampLA, and for the fifth time I’m presenting. This time it’s a somewhat technical topic - multithreading. Here are the presentation slides:

OpenTweet - distributed twitter-like service?

May 3rd, 2008

Scott Hanselman in this tweet (and on his blog) proposed a sort of a distributed tweet server to serve as an alternative to Twitter. He points out that this has been proposed before, and even implemented to some degree.

On the surface, Twitter is a feed of messages — combined with many ways of getting messages to the service, as well as many ways of getting messages from the service, plus a framework for subscribing to others’ feeds and seeing others’ friends’ feeds.

A single user can certainly host their own feed, and (via some soon-to-be-established microformat) link to this feed. Assuming that we discount the issue of maintaining your own feed, how will people find your feed? How will they subscribe to it in a way that incorporates the rest of their friends’ feeds? How will replies and direct messages (both excellent features of Twitter) be implemented in such a model? Let’s dissect these in more detail.

Finding a feed

The low-tech, least-usable way of doing this is, of course, to just tell the reader to find the publisher’s site, find the tweetfeed link, and add it to their reading software. In practice, this won’t really work: nobody wants to search the net to find your tweets. My suggested solution: a set of web sites where users can register; a federation of registration servers, of sorts. So ilya@mytweetserver.com may be my default indexing location; yours may be johndoe@leettweets.net. If each server publishes a list/directory of nicknames (usernames) in a standard format, we can build search engines to search these registries. In theory this means some networks will be effectively private until a search engine finds them, but such is the normal mode of Internet operation anyway.
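To make this concrete, here is a rough sketch of how reading software might crawl such registries. The directory path, file format, and field names below are purely hypothetical; they only illustrate the idea of a standard, machine-readable directory that search engines (or clients) can walk:

```python
import json
import urllib.request

# Hypothetical registration servers; in practice this list would come from
# a search engine or the user's own configuration.
REGISTRATION_SERVERS = ["https://mytweetserver.com", "https://leettweets.net"]

def fetch_directory(server):
    """Fetch the server's published list of nicknames and feed URLs.

    Assumes a made-up convention: /directory.json returning entries like
    {"nick": "ilya", "feed": "https://mytweetserver.com/ilya/tweets.xml"}.
    """
    with urllib.request.urlopen(server + "/directory.json") as resp:
        return json.load(resp)

def find_user(nick):
    """Walk the known registration servers looking for a nickname."""
    for server in REGISTRATION_SERVERS:
        for entry in fetch_directory(server):
            if entry["nick"] == nick:
                return entry["feed"]
    return None
```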

Subscribing and reading

Feed consumption need not be anything fancy. Something akin to online or offline RSS readers will be enough, albeit probably optimized for very short messages without bodies. When you subscribe, you subscribe to the tweetfeed of the publisher; your reading software will integrate all the tweets into a single, coherent timeline.
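As a rough sketch of that reading model (assuming ordinary RSS/Atom tweetfeeds and the common feedparser library; the feed URLs would be whatever you subscribed to), the client just pulls every subscription and interleaves the entries by date:

```python
import time
import feedparser  # widely used RSS/Atom parser: pip install feedparser

def merged_timeline(feed_urls):
    """Combine several tweetfeeds into one reverse-chronological timeline."""
    entries = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        author = parsed.feed.get("title", url)
        for entry in parsed.entries:
            published = entry.get("published_parsed") or time.gmtime(0)
            entries.append((published, author, entry.get("title", "")))
    entries.sort(reverse=True)  # newest first
    return ["{0}: {1}".format(author, text) for _, author, text in entries]
```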

Replies / direct messaging

In order to support replies, we need to determine how we will address users, and how we will deliver the messages. I’m commingling replies and direct messages here as follows: to me, replies are public messages plus notifications to the target user that a new reply has been created, while direct messages are notifications to the target user that contain the reply, and the reply is not posted to the feed.

So how do we address users? The @user syntax is a very convenient one, so we can try to piggy-back on that by having @user mean user@SERVER — and the SERVER is my default registration server. If I want to address a user on another server, I can always use the fully-qualified @user@someserver.com. So for example, let’s imagine that there are three users in the world: john@serverA, jane@serverB, and jeremy@serverA as well. When john wants to reply to jeremy, he simply needs to start the reply with @jeremy, since jeremy is a user on the same server. When replying to jane, however, john would need to start the reply with @jane@serverB. Perhaps client software can make this easier via address book lookups. When john’s tweetfeed is published, the feed contains john’s default server name, serverA. A client consuming the feed would know how to resolve the usernames, and thus where to look up these other users’ feeds.
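A minimal sketch of that resolution rule (the regular expression and the server names are only illustrative):

```python
import re

MENTION = re.compile(r"@([A-Za-z0-9_]+)(?:@([A-Za-z0-9.\-]+))?")

def resolve_mentions(text, default_server):
    """Expand @user and @user@server mentions into (user, server) pairs."""
    resolved = []
    for nick, server in MENTION.findall(text):
        resolved.append((nick, server or default_server))
    return resolved

# A tweet published in john's feed, whose default server is serverA.example:
print(resolve_mentions("@jeremy check out what @jane@serverB.example said",
                       "serverA.example"))
# -> [('jeremy', 'serverA.example'), ('jane', 'serverB.example')]
```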

So that’s addressing, now what about delivery? This may be approached much like trackbacks: a standardized way of submitting a short snippet of data to your registration server. In a way, your registration server is your notification server. Some servers may be configured to forward the message on to your own tweet feed engine as an actual comment-like post; others may email you; others may publish a reply-feed that you may then consume with your own reader. This will then solve the problem of the privacy of direct messages as well — your private reply-notification and direct message feed will only be known to you.
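Here is a hedged sketch of what such a trackback-style notification could look like. The /notify path and the JSON field names are assumptions made up for illustration, not an established protocol:

```python
import json
import urllib.request

def notify(target_server, target_nick, excerpt, reply_url=None, public=True):
    """POST a small notification snippet to the target's registration server.

    For a public reply, reply_url points back at the post in my feed; for a
    direct message, the excerpt carries the text and nothing hits the feed.
    """
    payload = json.dumps({
        "to": target_nick,
        "excerpt": excerpt,
        "reply_url": reply_url if public else None,
        "public": public,
    }).encode("utf-8")
    req = urllib.request.Request(
        "https://{0}/notify".format(target_server),
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```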

Finally, we need to address spam. In a distributed system, combating fake-reply or direct-message spam may not be trivial at all. My initial thoughts are to a) force reply messages to include a computed hash of some value from your feed, or b) have the reading software filter out any messages from people it knows aren’t your friends, and require your friends to include a public key in their feed and to sign their messages with their private key.
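As a sketch of option (b), using Ed25519 signatures from the third-party cryptography package (the algorithm and library are my choices for illustration, not part of the proposal itself):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a keypair once and publish the public key in the feed.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"@jane@serverB.example thanks for the link!"
signature = private_key.sign(message)

# Reader side: verify the signature against the key taken from the sender's feed.
def is_from_friend(sender_public_key, message, signature):
    try:
        sender_public_key.verify(signature, message)
        return True
    except InvalidSignature:
        return False

print(is_from_friend(public_key, message, signature))  # True
```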

Conclusion

This may all be a bit overkill, but on the other hand may result in a nice, friendly Twitter-like service that doesn’t really lose any of the functionality that makes Twitter so attractive and even gains some capabilities to create private networks.

Oh yeah, and follow me at http://twitter.com/haykinson :-)

OLPC and the world economic imbalance

December 25th, 2007

In ways typical of his usual punditry, John C. Dvorak decides to bash the One Laptop Per Child project. He reaches into his bag of over-the-top comparisons and sly turns of phrase, and argues that the laptop project is just the rich West being guilted into doing something for poor Africans and choosing this totally inappropriate remedy.

The British pundit Bill Thompson answers Dvorak and pokes holes in his argument — and does a wonderful job of bringing a dose of reality into the conversation.

The argument isn’t the first one out there. The debate about the OLPC has been going on ever since Nicholas Negroponte started talking about the project. It’s been both praised and criticized on everything from its technology and its cost to its educational value, and every other conceivable aspect of its existence.

The strongest argument, however, is not one based on the OLPC at all. While talking about a green computer for kids, we are really getting in the middle of a long-standing debate in the West about how to approach post-colonial Africa and other developing nations. Is it right for the West to patronize the developing nations? Should we leave them to figure things out for themselves, even if it takes a hundred years? As people living in industrialized nations we feel a strong sense of injustice: our life expectancies are reasonable, our purchasing power is providing us with an ever-growing material wealth, and we have risen from working to provide for sustenance to being able to hold meta-discussions about our societal goals — yet, we have done a lot of this by letting others remain in the dark as we took their natural resources and at times decimated their populations.

With the world’s wealth severely out of balance, the self-conscious West wants to find ways to help. It tries sending aid – food and clothing, technology and money – most often with no long-term benefit. With aid efforts coming and going without any noticeable impact, it’s not surprising that people like Dvorak grow weary of yet another way to ram an unasked-for gift down the throats of people who would most definitely like food first, and green laptops later.

The world’s resources may be seen as a zero-sum game, and the West has been playing with the developing world as if the game were exactly that. It typically talks about help to the developing world as a transfer of wealth in some way, be it food, medicine patents, or even free military help to deal with internal political conflicts. What the West fails to realize is that in order to truly solve the imbalance of wealth, we would have to somehow radically re-allocate our resources. This means that the West would have to become really, really poor relative to its current situation. If the Gross World Product is about $65,960,000,000,000 (that’s about $66 trillion), and there are about 6.6 billion people in the world, the average per-capita GDP should be about $10,000. That’s a quarter of the per-capita GDP in the United States and less than a third of the per-capita figure in France; Malaysia would have to shed nearly a third, and even folks in Mexico would have to sacrifice a little bit. I do not think that the world, and especially the West, is ready to accept that kind of a transition. Indeed, even if we wanted to re-balance the world in a less egalitarian way and, say, create a 4-to-1 disparity between the richer and the poorer, we would still have to bring the per-capita figures down to $16,000 in the 30 to 40 different Western countries.

Assuming that there will never be political will in the West to voluntarily become poor in order to empower the developing world, the only solution is to think of economic growth for the developing nations as their acquiring new abilities rather than simply receiving transfers from the rich parts of the world. This is where projects like the OLPC come in. With a relatively small investment from the West — consisting mainly of the technical and organizational know-how required to put together the laptop, its software, and its manufacturers — the West creates a tool that may empower the creation of new wealth, and new prosperity, in the developing world. Education and other stimulants of the information economy have the power to create a set of people who will contribute to the world — and their local society — in ways that would not be possible if we were simply providing expendable food. If a million laptops are distributed, and 700,000 of them crash and burn, and a further 200,000 are stolen, that still leaves a hundred thousand kids who are exposed to the Internet, the information economy, computation, software development, hardware engineering… you name it, in some way or another. That knowledge is not priceless, but it will last a lot longer than temporary, insufficient one-time wealth transfers from the ashamed West.

In 2002, the president of Pakistan, Pervez Musharraf, spoke to the Organization of the Islamic Conference’s committee on Science and Technology. In his speech he pointed out the great educational disparity between the entire Islamic world and the West (he actually used the example of the entire Islamic world having only 430 universities, while Japan alone has 1,000). He urged his colleagues — other heads of state — to invest in higher education as a way of lifting the member nations out of the relative darkness, as he described the situation. He even declared a jihad “against illiteracy, poverty, backwardness and deprivation”.

Mr. Musharraf’s words are encouraging. That the leader of a nation deep in the midst of poor, developing nations acknowledges at the highest level the need to encourage and develop education as the primary method for escaping the economic imbalance indicates at least some understanding of the forces that create long-term prosperity. Stimulating the knowledge and information economy can also happen by means other than building universities: with methods such as the little green laptops, which cost the West fairly little but provide the developing world with a platform on which to build its future.

Now we just have to hope that Mr. Musharraf and others like him continue to believe in this principle, and begin this investment in their future in earnest.

Great job found

December 25th, 2007

By the way, I now work at Hulu, and I’m totally loving it. If anyone else is interested in a job, and can stand up to the challenge, I definitely welcome emails or comments on this post.

Looking for a great job

September 11th, 2007

After almost 6 years of working on the product, I’ve resigned from Kareo. It’s been a great journey: growing as a software developer, improving my architecture skills, spending almost two years as the manager of the development team, and in general seeing a real business grow from its founding into something real. However, it’s now time to move on.

So to that end I’ve begun looking for a new place to work. (Here’s my resume, by the way). I’m not totally sure where I’ll end up, but I know about a few things that excite me (both on the business front as well as the technological front):

  • In general, startup companies with solid business ideas
  • Current (thrilling) changes in the .NET world (e.g. LINQ and other C# 3.0 features)
  • The Mono Project and the availability of .NET on Linux
  • Silverlight and Moonlight, as well as XAML and XUL
  • Ubuntu’s success in creating a thrillingly useful Linux desktop
  • Solving the many “App 2.0” challenges of bringing desktop apps towards their web brethren
  • Agile teams of smart people

If you know of places that are great in some way, let me know.

Hooked on Who

September 3rd, 2007

For the last few months I’ve been completely addicted to Doctor Who — I bumped into an episode on PBS, and ended up watching all of the new series (all three seasons). I then watched Torchwood (the “adult” spinoff), and have even started watching the entire old series. To that end, I’m keeping a blog, http://hookedonwho.wordpress.com, to chronicle my efforts — and my impressions.

Overall, I’ve got to say a giant Thank You to two entities: 1) Peer to Peer networks, which provided me with the second and third seasons of the new series, which were just not available in the US yet, not to mention Torchwood, which only starts here in a few weeks but aired in the UK months ago; and 2) YouTube, whose users diligently upload old serials that are only available on (hard-to-find) VHS tapes, making it possible to watch almost the entire old series.

Comparative Subway Systems 101

November 14th, 2006

It’s really strange to observe BART as compared to the Moscow Metro.

BART seems to be very utilitarian and unfriendly to the visitor. The map is a bit confusing (where are the stations exactly? the lines cover up quite a swath of the city), and the lines are confusing (why are there four different lines going in the same direction over the same track in the city?). It’s even less clear which line a train actually belongs to when it pulls up at the platform — there’s an announcement of the final destination, but one has to look at the map to tell which train you really want. Finally there’s what’s announced in the train itself: basically, nothing. You can kind of hear the name of the station you just pulled up to, and then, really quickly, an announcement of the destination of the train again. And the cars have no information on the routes you are following — definitely nothing on the current route, and not even an obvious system map.

Compare this with the Moscow Metro. Trains to different destinations almost never arrive at the same platform — you always know where a train is going based on the platform you’re on. Each platform indicates which stations lie ahead in the direction of travel, so that you can calculate how many stops you need to wait before you should shove your way out of the car. Upon arrival at each station, the system announces “Station [name of station]. Transfer to station [name of other station] on the [name of line of the other station].” Before the train leaves each station, the system announces “Warning, doors are closing; the next station is [next station in direction of travel].” If you’re on a line for the first time, this makes it very obvious what’s about to happen. If you’re there every day, it’s annoying but possible to tune out. Each car has a system map next to almost every exit door, and a map of the current line above almost every exit door as well.

In addition to San Francisco, I’ve been to subways in LA, Chicago, NYC, several cities in Russia, several cities in Europe, and in Argentina. I think the BART system really gets low points for newbie usability.

Wireless impressions

August 23rd, 2006

There are a few defining technology moments for me. The first was when I saw a computer for the first time (I remember playing Frogger and Moon Patrol). The second was when I got a 486 as an upgrade from a PC XT; I was just not prepared for the speed increase. The next came around 1995, when I realized that a three-block pedestrian street near me had a website. If a street could have a website, then the future surely had arrived.

I think I had a fourth defining technology moment the other day: I got my cellphone to stream music from the Net. It’s not a major achievement, but here I was, holding a little box in my hand, receiving more data in a few seconds than I could have fit on any storage device just fifteen years prior. I could go anywhere in my home with this box, and my music — some music — would always be with me. And it’s not really the music that got me — after all, you have been able to do this with radio for decades. It’s the fact that I really do now have the jukebox-in-the-sky; I can listen to exactly what I want, where I want, without wires and with just my personal helper device. It’s also the fact that it wasn’t just music that was streaming; it was 40 kilobits per second of highly compressed data. I was connected; I was really part of the network. I took this further the next day: I plugged my jukebox-in-the-sky into my car’s auxiliary stereo port. Listening to my streaming station at 70 miles an hour, I was on the Internet, still part of the network.

I was receiving. This was definitely the fourth defining moment for me.

Windows / RSS

August 6th, 2006

Microsoft’s new version of IE will have RSS feed management based on the new Microsoft Feeds API. This API is also going to be used by the upcoming Office 2007, notably in Outlook.

This is actually a great development. I’ve been a big fan of mail-reader-based feed readers, and I totally love the fact that I no longer have to use third-party software. What I’m not happy about is feed synchronization.

I fairly regularly use three computers: a desktop at work, a desktop at home, and a laptop that I bring back and forth. I run Outlook 2007 (beta, of course) on my laptop. However, I sometimes want access to my feeds on one of the other machines, whether because I quickly want to check one of the blogs, or because I find a site that I want to add to my Outlook reading list. But, being on another machine, I’m out of luck.

Outlook makers thought of this, somewhat. Since feeds get delivered into Exchange folders, I can actually just read the posts being retrieved by my laptop from wherever I can run Outlook. But what if I don’t want to run Outlook everywhere?

The solution has got to be feed synchronization. A simple service could receive OPML files containing your blogroll and then synchronize them across Windows devices. Run a small piece of client software, and it keeps track of your feeds, updating each client with changes from the others. All your machines would of course start checking all your feeds, but that’s a small cost to pay for keeping your computing environments synchronized. A rough sketch of what such a sync pass could look like follows the requirements list below.

Requirements for such a service:
  1. Create and manage an account
  2. Support initial import of an OPML file
  3. Configure itself as either a reader only, or full synchronizer
  4. Regularly query the service and update the local Common Feed List with new feeds
  5. Regularly query the local Common Feed List and upload any changes to the service
  6. Support export of OPML file
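As promised above, here is a minimal sketch of one sync pass (requirements 4 and 5), assuming the local Common Feed List has been exported to OPML. A real client would talk to the Windows RSS platform and to the service’s API, both of which are glossed over here:

```python
import xml.etree.ElementTree as ET

def feed_urls_from_opml(opml_text):
    """Extract feed URLs from an OPML blogroll (requirements 2 and 6)."""
    root = ET.fromstring(opml_text)
    return {outline.get("xmlUrl")
            for outline in root.iter("outline")
            if outline.get("xmlUrl")}

def sync(local_feeds, service_feeds):
    """Reconcile the local feed list with the copy held by the service."""
    to_add_locally = service_feeds - local_feeds  # pull new subscriptions down
    to_upload = local_feeds - service_feeds       # push local additions up
    return to_add_locally, to_upload
```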

It would really be best if a service like Bloglines just built an API for this; they already have OPML capabilities and have a feed-savvy account base.

Thoughts, anyone?

Those darn wikis

June 21st, 2005

The other day, the LA Times launched a concept they called a wikitorial — basically, they started a wiki, put a professionally written editorial on it, and linked to this wiki from the newspaper’s site’s home page. Then, as far as anyone can tell, about a day later they shut the site down and blamed vandals for the closure. The blogosphere has been laughing, in a way, at the LA Times folks who couldn’t handle the heat. I think they’re wrong to laugh.

Shortly after the wikitorial was put up, I found myself on the LA Times wiki — they called it LATWiki. This to me looked like a crippled version of the MediaWiki software that runs Wikipedia and Wikinews. Crippled, because some things I expect in the software, like the Recent Changes list, or any sort of community pages for coordination of editing efforts, were very clearly absent. No village pump, no water cooler.

After just manually going to the recent changes page, two things became obvious: 1) there was some vandalism going on, and 2) Jimbo Wales was online, fixing things. Between the two of us, we then moved some pages around, introduced some navigation, created a bit of space for collaboration, left instructions for newbies, and kept an eye on vandalism and reverted it. A few other MediaWiki-savvy folks dropped in over the course of the day to tidy things up.

Where were the site’s administrators? Probably watching what was happening. After Jimbo and I stopped minding Recent Changes, the admins banned a user or two and reverted some changes. Eventually they had to go to sleep — which probably quickly resulted in vandals changing the site enough that the only way they knew to cope was to close the wiki.

And then they emailed us, asking for advice, which both Jimbo and I have been very glad to provide. The people at the LA Times who were responsible for the wikitorial really want to do this right, but my feeling is that they simply didn’t yet know how to properly run a wiki. The terms of service were horrendous, and the community-building was nearly non-existent. Even vandal-fighting tools like Recent Changes were not easily available.

In my communication with the LA Times folks in charge of the project, I recommended the following:

Running a wiki requires a very simple formula. The site has to have a purpose (which yours does). The folks who sponsor the site have to be well-known and accessible (that means that you have to be involved on the site — make user pages, respond to things posted on your talk pages, etc). The site’s visitors have to be given responsibility — out of all your visitors, a small percentage will get interested enough in keeping your wiki going that they’ll see it as their wiki: accept their behavior and encourage it! Let the users set all the policy for content, navigation, language, attribution, etc. And reserve the right to rule by fiat, but use it very, very, very rarely.

I think experiments such as this one have to be encouraged, and not ridiculed. It takes a lot to move a mainstream media organization such as LA Times in the direction of the wikisphere and the blogosphere. The very nature of a newspaper is to want to be insular and to bind every published word with some strict legalese. It is the nature of a wiki to be as open as possible and to resist limitations.

So why did the wikitorial come down, and what does it mean? I argue that it wasn’t because of vandalism per se, but because LA Times wasn’t yet ready to start a community. They weren’t ready to trust random users enough to make them site admins. They weren’t ready to let users form policy.

And it’s ok that they weren’t — heck, at least they even tried this experiment. Most other large organizations haven’t, at all. What we as wiki-savvy, online community members should be doing is to give constructive feedback on building such a community, on fixing terms-of-service problems, on making the site work when it comes back up. And then maybe after a few years we’ll successfully see a mainstream media wiki that is open, thriving, and accomplishes its goals.

News about Wikinews

April 30th, 2005

A few months ago I was interviewed by a reporter about Wikinews. Unfortunately it turned out that the magazine killed the story after the reporter had come out and interviewed a good number of Wikinewsies — so the story is now posted on a blog. Read the story now!

Nova Smoked Salmon

April 10th, 2005

I’d eaten some nova smoked salmon today, only to bump into a food safety story at Wikinews saying that a brand of nova smoked salmon had been recalled for being tainted with some evil germs… I had to dig the food wrapper out of the trash to make sure I hadn’t eaten any of the recalled brand.

Talk about a scary story. I don’t even eat smoked salmon all that often.

Economist: customer is king

April 1st, 2005

The Economist’s article about the customer really becoming king is an entertaining read. Basically, the advertising industry is faced with a new reality: the customer is now so educated and so in control that the models of selling brands have to change. (via Scoble)

Update: Thanks Ofer for the tip: link fixed.

Yahoo releases a Creative Commons search

March 24th, 2005

Yahoo just released a Creative Commons search — it lets you find web resources that are licensed under the various Creative Commons licenses. Way to go, Yahoo!
(read the announcement on the Yahoo search blog)

Microsoft imitating del.icio.us?

March 11th, 2005

del.icio.us is a very good social bookmarking site. It’s at once a way to keep your online life organized, a place to see what others are interested in, and a method for information lookup.

It looks like this point isn’t lost on Microsoft. The MSN Sandbox site start.com seems to have a playtest version of an online bookmark management site — without the social aspects yet, but those could easily follow. The new site also seems to integrate with RSS in some ways, and for now it only works in IE. But it’s a “start”…

This is the second project from the sandbox site — the first, start.com/1, is an online RSS reader. This one is at start.com/2, and I suppose many people are pretty curious about what’ll pop up on start.com/3.

Yahoo’s Search API

March 1st, 2005
Yahoo yesterday released their Search API. I’m joining the chorus saying that this is a Good Thing, for a few reasons.
  • The application key is a really good idea. Some APIs, like Google’s and Flickr’s, require developer tokens. While those are a good way to regulate who uses your service and curb abuse, the application key holds an advantage for a developer: it allows the application’s source to be distributed in a working state. For source-only applications (such as websites or scripts) a developer code is impossible to hide. Since the companies typically don’t want you to show people “your” code, people have been distributing source code with “INSERT_YOUR_TOKEN_HERE” in place of the actual token. In Yahoo’s model the application is easily distributable. The downside, of course, is that Yahoo has no way of knowing whether I just stole an application key from someone else’s app. It also allows a developer to register a large number of application keys and query Yahoo using those keys in round-robin fashion. These are curbable abuses, though, and the benefit to developers likely outweighs the downside of server management.
  • REST API. While REST is not the best way to access data (SOAP is definitely more structured and more standard), the REST model allows people using languages without good native SOAP support to query the service (see the sketch after this list). I think it’s still really important to offer a SOAP interface, however — those of us using platforms with good SOAP support dislike having to hack around REST.
  • Developer communication. Unlike Google’s somewhat marketing-rich blog, the Yahoo folks actually have a real developer blog. They also have a wiki — nice!
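As referenced in the REST API item above, here is a rough sketch of what a REST-style query looks like in practice. The endpoint path and parameter names are from memory of that era’s documentation and may not be exact; the point is only how simple a plain HTTP GET is compared to building a SOAP envelope:

```python
import urllib.parse
import urllib.request

# Approximate endpoint and parameters, shown for illustration only.
params = urllib.parse.urlencode({
    "appid": "YOUR_APPLICATION_ID",  # the application key discussed above
    "query": "creative commons",
    "results": 10,
})
url = "http://api.search.yahoo.com/WebSearchService/V1/webSearch?" + params
with urllib.request.urlopen(url) as resp:
    print(resp.read()[:500])  # raw XML search results
```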

Add a high query limit to this list, and you’ve got a great service that definitely moves the web service evolution of the Net forward.

CVS Commit RSS Writer

February 2nd, 2005

I’ve written a utility that syndicates CVSNT commit logs as RSS.
The idea is that it’s a lot easier to subscribe to an RSS feed of commits than to manually hunt through the repository, or to use a similarly-hokey commit emailer.

The software is still kind of raw, but if you are really missing this functionality then it might come in handy. It is licensed under the GPL.
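For a sense of the idea (this is not the actual utility, just an illustrative sketch with a made-up commit structure), turning parsed commit-log entries into an RSS feed can be as simple as:

```python
import xml.etree.ElementTree as ET

commits = [  # hypothetical parsed CVSNT log entries
    {"author": "ilya", "file": "src/main.c", "rev": "1.42",
     "date": "Wed, 02 Feb 2005 10:00:00 GMT", "message": "Fix null pointer on startup"},
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "CVS commits"
ET.SubElement(channel, "link").text = "http://example.com/cvs"
ET.SubElement(channel, "description").text = "Recent repository commits"

for c in commits:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = "{file} {rev} by {author}".format(**c)
    ET.SubElement(item, "description").text = c["message"]
    ET.SubElement(item, "pubDate").text = c["date"]  # RFC 822 dates, per RSS 2.0

print(ET.tostring(rss, encoding="unicode"))
```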

Daft Punk - Human After All

January 28th, 2005

The new Daft Punk album is quite disappointing. Even “Discovery” wasn’t as good as “Homework”, though it had a few really nice tracks. But the new album, “Human After All”, has completely failed to impress me on first listen.

This is really too bad, too, because I think Daft Punk is (was?) a truly revolutionary group.