Curation or Algorithm? … David Hume weighs in.

I’m a philosophy nerd. I read quite a bit of the stuff, I listen to podcasts, and I even take online courses from time to time. This is no boast, and I’m no dilettante (those of you who know me get this), but I do dig the stuff, and furthermore, I’m convinced that all of us who work in this topsy-turvy, madcap tech circus could periodically benefit from the wisdom of the greats to enlighten us with perspective on the human condition – – you know – “user behavior”. Recently, I’ve been following the debate/discussion that can be summarized as “Man vs. Algorithm” with regard to the filtering and recommendation challenge for digital content.

The “recommendation problem” has been a constant theme in my career and, indeed, it’s a big part of the internet’s story. How do you filter all of that information so that a single human can acquire the result they desire? “Curation” is a concept that’s been floated as, if not a sufficient solution, then at least a very necessary component of any attempt to solve this intractable problem. For the purposes of this post, when I say “curation”, I mean a process that involves human editors/curators – I realize that technically, “curation” can be accomplished by a computer, but this over-complicates things, and the really interesting discussion is the whole “John Henry vs. the steam drill” (or… curation vs. algorithm) thing.

It’s an even more interesting problem if you perform the following thought experiment: ok, so let’s say curation is the answer – well, what does the PERFECT curation experience look like? Those of us who build stuff often start with these conditions (knowing, sadly, that we will never achieve them) as a means to define the parameters of the possible outcomes. Turns out, as usual, this debate has been going on, in one form or another, for quite a while, and fundamentally, the issue goes much deeper into human nature… so, David Hume – you’re a smart dude, what say ye?

I’m a big fan of David Hume, and “A Treatise of Human Nature” is, IMHO, one of the greatest practical philosophical works of all time. (I actually think he is the patron philosopher of product and design even more so than Karl Popper, but that’s for another post.) So, I was recently super stoked (see, no dilettante here) when I learned about a Hume essay entitled “Of the Standard of Taste” while listening to this episode of the Philosophy Bites podcast (which I highly recommend). It turns out that the work is actually quite seminal in the field of aesthetics, while for Hume, it was a slapdash 34 paragraphs he wrote quickly to fill a gap in a book of essays – – must be nice when your throwaway stuff makes you the poster boy for an entirely new field of study (Hume called it “criticism”; only later was the area of study labeled “aesthetics”).

The podcast episode provides a really great analysis of the essay, as does this entry from the Stanford Encyclopedia of Philosophy. In the essay, Hume takes on how beauty in art is defined, and boldly, he proclaims that beauty (you can also read “quality”) is not subjective – that is, not “in the eye of the beholder” at all. For those of you who are as lazy as I am, here is a 5-bullet-point breakdown:

  • Hume, the quintessential empiricist, believes that all general rules of art are based on experience, not on a priori knowledge (in opposition to Kant and subjectivism). This means that we derive our sense of beauty and quality through our senses, and it is not innate in us.
  • Questions of beauty and whether a work is “great” or not ARE matters of feeling and pleasure, BUT that doesn’t mean that there isn’t a “right answer” as to whether something is beautiful or great. He describes it as nothing less than “fact” in this passage:

But if we consider the matter aright, these are questions of fact, not of sentiment. Whether any particular person be endowed with good sense and a delicate imagination, free from prejudice, may often be the subject of dispute, and be liable to great discussion and enquiry: but that such a character is valuable and estimable will be agreed in by all mankind. Where these doubts occur, men can do no more than in other disputable questions, which are submitted to the understanding: They must produce the best arguments, that their invention suggests to them; they must acknowledge a true and decisive standard to exist somewhere, to wit, real existence and matter of fact; and they must have indulgence to such as differ from them in their appeals to this standard.

  • Taste is varied, just like moral sense, but as with morality, some things are simply “better” than others just as some acts are accepted as “moral” or “immoral” within human society.

It is indeed obvious, that writers of all nations and all ages concur in applauding justice, humanity, magnanimity, prudence, veracity; and in blaming the opposite qualities. Even poets and other authors, whose compositions are chiefly calculated to please the imagination, are yet found, from HOMER down to FENELON, to inculcate the same moral precepts, and to bestow their applause and blame on the same virtues and vices.

  • Therefore, we must have a “standard of taste”, just as we have our morality codified in laws.

It is natural for us to seek a Standard of Taste; a rule, by which the various sentiments of men may be reconciled; at least, a decision, afforded, confirming one sentiment, and condemning another.

  • So… who/what defines the “standard of taste”? He has an answer for that, as well, and the answer comes straight from the mouths and pens of “true judges”. In fact, the “standard of taste” is specifically defined as the “joint verdict of the true judges”. He goes on to define how one becomes a “true judge” and how we might identify these chosen savants.

Though men of delicate taste be rare, they are easily to be distinguished in society, by the soundness of their understanding and the superiority of their faculties above the rest of mankind. The ascendant, which they acquire, gives a prevalence to that lively approbation, with which they receive any productions of genius, and renders it generally predominant.

Ok, I’ll break here for a moment just to comment, because to me, this is just great stuff and fodder for many a bar-room argument… there IS a greatest rock album, and it can be legitimately argued. In Hume’s world, music critics (at least the PERFECT ones) aren’t bitter and/or obsequious leeches “dancing about architecture”; rather, they are exalted leaders, men of honor, and important social glue for society.

There is a bit of a problem (and some controversy) in his theory, because Hume’s qualifications for a “true judge” are stringent, bordering on the impossible – – some philosophers say that, in fact, Hume was really referring to an ideal and that he did not believe such “true judges” could exist, but we’ll stick to what we can directly infer from the text. I’ll bullet down his qualifications below:

Whether any particular person be endowed with good sense and a delicate imagination, free from prejudice, may often be the subject of dispute, and be liable to great discussion and enquiry: but that such a character is valuable and estimable will be agreed in by all mankind.

  • By “good sense” (he also refers to this as “strong sense” elsewhere in the text), he means that the person needs to be intelligent, reasonable, and able to understand the various elements within the subject matter he/she is endeavoring to criticize. The judge must also be able to use their intelligence and good sense to organize their thinking and argue well – – they have to defend their position, because they are “right”. So, for Hume, you CAN lawyer your way into proving that the Beatles are the best rock band ever… or whatever.
  • Owning a “delicate imagination” (also referred to as “delicate sentiment”) means being very in tune to the subtle nuances and details of an art form. Hume is adamant that in order to empirically acquire this quality, one requires a lot of training, practice, and experience in order to nurture the judgement to make the fine-grain comparisons between works and their components.
  • Finally, one must be “free of all prejudice”. For Hume, this primarily meant that one must be able to rise above any bias derived from allegiance to nation, religion, race, or culture. Yeah, no problem.

So what do we have here, aside from the most pretentious job posting of all time for an editor? I think we find some reference material for framing our discussion about “curation vs. algorithm” and the recommendation challenge. Clearly, if Hume were pressed to answer the “Man vs. Algorithm” question, he would come down squarely on the “man” side, and as it happens, he has a really nice essay to back it up. But deeper than that is his belief that there IS an objective answer — that we can KNOW whether art is good or not.

It’s curious to me that the “joint verdict of the true judges” smacks a little bit of what we might today call crowd-sourcing. Certainly, Stack Overflow and Wikipedia have pretty tried-and-true mechanisms for separating the “true” from the “untrue” judge – – and when we reach that “joint verdict”, it is sort of starting to feel like pornography – we know it when we see it. What would Hume think of a recommendation engine that delivered subjectively good results, but objectively bad ones… that is – what if you have “bad taste” and Spotify, Beats, or Pandora just keep feeding you more “bad” music? I think Hume would consider that a bad outcome and a useless endeavor… probably even immoral. I think he might argue that your continued spiral into crappy content just makes you and society dumber and worse citizens… hard to say, but probably not a bad guess.

Certainly, the qualities of a “true judge” would be hard to replicate with computer code – it is hard to think of a programmatic way (short of science fiction-y AI scenarios) to understand the nuance and context associated with a “delicate imagination”. That being said, a computer is going to be a heck of a lot better at “absence of bias” than a typical human, so maybe there is a Humean curator algo out there yet. The interesting observation for me is that the standard “product to product” collaborative filtering engine – i.e. Amazon – has elements of both objective and subjective recommendation. The collective action of millions certainly isn’t Hume’s idea of the ideal “true judge”, but just maybe the result is the same with enough events?
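To make that “collective action of millions” concrete, here’s a minimal sketch of item-to-item collaborative filtering in the Amazon style – toy data and made-up movie ratings, emphatically not Amazon’s actual algorithm:

```python
from math import sqrt

# Hypothetical user-item ratings (1-5). In production this matrix would
# have millions of users; here, four movie fans stand in for the crowd.
ratings = {
    "alice": {"predator": 5, "commando": 4, "die_hard": 5},
    "bob": {"predator": 4, "commando": 5},
    "carol": {"die_hard": 5, "expendables": 2},
    "dave": {"predator": 5, "die_hard": 4, "expendables": 1},
}

def item_vector(item):
    """Ratings for one item, keyed by the users who rated it."""
    return {user: their[item] for user, their in ratings.items() if item in their}

def cosine_similarity(a, b):
    """Cosine similarity between two items' rating vectors."""
    va, vb = item_vector(a), item_vector(b)
    common = set(va) & set(vb)  # users who rated both items
    if not common:
        return 0.0
    dot = sum(va[u] * vb[u] for u in common)
    norm_a = sqrt(sum(v * v for v in va.values()))
    norm_b = sqrt(sum(v * v for v in vb.values()))
    return dot / (norm_a * norm_b)

def recommend(item, k=2):
    """The k items the crowd rates most similarly to `item`."""
    candidates = {i for their in ratings.values() for i in their} - {item}
    ranked = sorted(candidates, key=lambda i: cosine_similarity(item, i), reverse=True)
    return ranked[:k]

print(recommend("predator"))  # ['commando', 'die_hard']
```

Each recommendation is just the aggregated verdict of whoever rated both items – cosine math standing in, very crudely, for “delicate imagination”.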

So who would be the patron philosopher of the “uncuration”… the “give the people what they want and not what they deserve” camp? Well, I think we look at the opposing positions regarding aesthetics – Hume was an empiricist, and his essay on taste came from an empiricist’s viewpoint. So, I would go to two philosophers who stood opposed to Hume’s empiricism. Immanuel Kant, because although he wasn’t, by most accounts, really a subjectivist, he basically spent his whole career and his greatest work (The Critique of Pure Reason) directly arguing with Hume and his empiricist views. His direct work in aesthetics was Observations on the Feeling of the Beautiful and Sublime. I would also look to Søren Kierkegaard, who is basically the poster boy for subjectivism. Both philosophers, on this reading, put beauty and quality in art COMPLETELY in the eye of the beholder, derived from nothing but pleasure and feeling. In their ideal internet, feeding you more crap on top of the crap you already consume is a perfect goal, as long as you enjoy it – – but I think we all doubt that path reaches any decent goal, and therein lies the rub…

As human beings, we have subtle tastes, a huge dependence on context, and, as Hume would say, “delicate imagination” when it comes to the stuff we love. I love me some bad action movies… I mean some really bad ones… but I don’t love ALL bad action movies, and in fact, I can’t even describe to you what makes me love the ones I love and hate the ones I hate. Some of my friends understand my choices immediately, and it is because they are free of bias and share my “delicate imagination”, which enables us to know that Predator is totally awesome, while The Expendables mostly sucks (again – not a dilettante).

I don’t think the real debate is “Curator vs. Algorithm” – I think that debate exists and is relevant at the tactical level, but I don’t think we’ve yet decided, collectively, what the goals of such mechanisms are supposed to be. When we design these things, we have goals in mind and we know the technical and semantic limitations of how far they can go, but I know from experience that even the best-case scenarios fall pretty short of what we would like. I do think that it’s useful, from time to time, to get our heads out of discussions based purely on UX and data science and remember that people a hell of a lot smarter than us, who had no internet, still have important ideas to convey… I’ll leave you with this.

 Many men, when left to themselves, have but a faint and dubious perception of beauty, who yet are capable of relishing any fine stroke, which is pointed out to them. Every convert to the admiration of the real poet or orator is the cause of some new conversion. And though prejudices may prevail for a time, they never unite in celebrating any rival to the true genius, but yield at last to the force of nature and just sentiment. Thus, though a civilized nation may easily be mistaken in the choice of their admired philosopher, they never have been found long to err, in their affection for a favorite epic or tragic author.

Google Reader is (almost) Dead… Long Live RSS!

(Did you notice that I made my “rss collage” header image sepia toned? That’s supposed to make it look all “old timey” and vintage… anyway..) Google does keep to their word, and they are pretty ruthless (and disciplined) about cutting products – – you’re next, Google Reader! At first – – but only for a brief moment – – I was a little disappointed. Let’s face it, though: it’s not like there haven’t always been a lot of RSS readers out there, and a lot of folks have been ringing the death knell of RSS for a long time – – even before Google made their announcement.

I still like and use RSS, and I’m moving to Feedly (like 3 million other people). The ability to just import your entire Google Reader account, including categories, made it a no-brainer – – ain’t creative destruction a wonderful thing? In fact, I hadn’t cleaned up my feeds in a while, and their “Organize” UI is really well done – check it out.

Oh… wait – they are gonna charge? RSS is Dead!

I think the claims that Twitter is killing RSS are valid, but my feeling is that a lot of people who started following a lot of media via Twitter had never used RSS in the first place. Twitter isn’t a good substitute for a “reader” (remember when they called them “news readers?”) because most of the publishers end up mixing in a lot of noise with links to full articles and blog posts. The lack of a structured summary isn’t that appealing, either, but I’m pretty sure no one reads that much anymore, so no big. Twitter killed reading… In fact, if you are still reading this blog post, I’m really shocked.

Zen and the Fallacy of Sunk Cost

Productonomics is my own little invention that applies an “economic way of thinking” to technology product development, and the Fallacy of Sunk Cost is, IMO, a highly applicable economic principle. Proper understanding and application not only benefit the products we make, they benefit morale and company culture. To give credit where credit is due, Steve Bronstein, a former colleague and a current friend, is really the one who stuck this phrase into my head and made me see the light (he very effectively argued against a position I held by dropping this science on me). I realized at that point how crucial the concept is in decision-making and strategic thinking, but I also realized how counterintuitive it is.

What is it?

The principle is easy enough to understand, and it can probably be summed up most succinctly by the “don’t throw good money after bad” idiom, although that’s not all there is to it. A short definition that I like is “Once the money (or time or effort) is gone, then it’s gone. There’s no point in worrying about this” (from Plonkey Money). This may be oversimplifying it a bit, but there is a ton of stuff out there on the web that can explain it better than I can (links below), and I want to focus specifically on how it can help product development. Most of what you find out there is associated with money management/investing, but the principle holds for any decision-making situation where spent resources come into play.

How is it useful in Product?

Our decisions now only affect the future and have no relationship to the past – – that’s the gist, and logically, it is not hard to understand. If I buy a ticket to a bad movie and I can’t get a refund, sitting through the movie doesn’t re-acquire any of the value I have lost – I’m better off walking out and doing something that gives me more utility/return – that’s the rational thing to do, anyway. But now let’s consider a situation that many of us have been in – the team has been working on a roadmap feature-set for months… discovery, design, prototyping – you name it. It’s been all-consuming, and buckets of sweat and tears have gone into it. When the assumptions were made and the numbers were run months ago, it seemed like a home run, but just recently – we either realized we were wrong and nobody wants this thing, or a bunch of external factors changed (and when doesn’t this happen?), or both. What changed isn’t important – – competitive environment, economic environment, regulatory environment, technology stack, legal framework, etc., etc., etc. – – the point is that what to do NOW is what’s important, and what was decided six months ago, while perhaps relevant for knowledge, is not relevant in the decision calculus going forward.

It comes down to two positions:

  • We should just kill the entire project and do something else.
  • We have put too much work into this and gone too far to quit now – we will have lost all of that work.

Well, assuming that the core assumptions are accepted to be true (that we were wrong or the landscape is no longer receptive), choice number 1 is the correct choice. Now, it’s important to make the distinction between the “we were wrong” vs. “things outside our control changed” cases, because each situation presents its own challenges to the people on the team, but let’s be clear that in either case, bullet #1 is the way to go. All that hard work is in the past, and without a time machine, we can’t retrieve the resources. All we can do now is make the best decision possible that defines our path from today onward, so continuing to throw resources at something we all feel is doomed is a fool’s errand.
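The decision calculus is almost embarrassingly simple once you write it down. Here’s a toy sketch with entirely made-up numbers – the point being that the money already spent appears nowhere in the comparison:

```python
# Sunk-cost-free decision making: compare options ONLY on future costs
# and future value. All numbers below are hypothetical.

sunk_cost = 900_000  # already spent on the doomed feature -- irrelevant

options = {
    "finish the doomed feature": {"future_cost": 200_000, "future_value": 150_000},
    "pivot to the new idea": {"future_cost": 300_000, "future_value": 800_000},
}

def net_future_value(option):
    """What this choice earns us from today onward. sunk_cost never enters."""
    return option["future_value"] - option["future_cost"]

best = max(options, key=lambda name: net_future_value(options[name]))
print(best)  # 'pivot to the new idea' -- no matter what sunk_cost is
```

Change `sunk_cost` to zero or to ten million and `best` doesn’t move – that’s the whole fallacy in four lines of arithmetic.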

What about the “Zen” Part?

Well, that all sounds fine and dandy and rational and what-not, but here’s the rub… the rational decision is likely to be unpopular, and if the culture of the company doesn’t allow failure to be acceptable, there are going to be political face-saving shenanigans, finger pointing, and a dogmatic group-think takeover. This quip from the Skeptic’s Dictionary says it all:

To continue to invest in a hopeless project is irrational. Such behavior may be a pathetic attempt to delay having to face the consequences of one’s poor judgment. The irrationality is a way to save face, to appear to be knowledgeable, when in fact one is acting like an idiot.

Ouch, but true… and I’ve been there… oh, have I been there. What’s the answer? Well, that’s where the “Zen” part comes in – it’s really about the company culture from the top down. We can’t be afraid of failure or creative destruction (we’ll be talking a lot about ‘cd’ and Schumpeter in the future), and this is the essence of the tech start-up, is it not? One of my favorite business books, Getting to Plan B, is devoted entirely to this concept: adapt and overcome. Well, you can’t adapt if you don’t make mistakes and then recover from them quickly, so understanding this fallacy and injecting that understanding into the culture of the team is important. The “Zen” part is being able to be in the moment, acknowledge that a change in direction is the best course, and let go of the past – “bend like a reed in the wind” (yes, that is a Dune quote). To sum this one up in a manner to which most people can relate, how about this quote (relayed by my friend and former colleague, Mike Tatum from Whiskey Media):

Would you rather be rich or right? – a very wise (and probably rich) man

Don’t Take it Too Far

Just like any knowledge, this can be misinterpreted or taken too far, so to be clear, this principle does NOT propose that we do not or should not learn from the past. Knowledge and experience, of course, should shape our decision-making. Also, if I am to be fair, the product scenario that I described above is really referencing a “Sunk Cost Dilemma”, and there is a significant distinction. The “sunk cost dilemma” adds a more realistic framework for real life – specifically introducing uncertainty and multiple events. In my example above, I said “assuming that the core assumptions are accepted to be true (that we were wrong or the landscape is no longer receptive)”. Well, the reality is that we probably aren’t often that certain that the “we were wrong” assumption is true, and so we are only making the best decision we can with the information we have – thus, uncertainty. In a Game Theory context, these decisions are made multiple times, and if you make the decisions based strictly on a calculus of ONLY looking at open costs (not sunk costs), you can actually get into a situation where the aggregate of each “correct” decision creates a negative result over time.

As decisions are only made considering open costs but not sunk costs, each single decision is computed to be beneficial. But in the end, the overall payoff of the project is negative. While the project progresses towards disaster, the decision not to go on with the project gets more and more unlikely. The project is like a train: once it has been put on a track, it is very difficult to change its direction.

I’m gonna go out on a limb here and say that I think a fundamental understanding of the sunk cost fallacy is still incredibly beneficial, and this scenario, in a way, argues the same broad point that I am making – – momentum for momentum’s sake is not productive, and realistic, honest decision-making is good. Essentially, this dilemma is created by “too much of a good thing”, and understanding the fallacy is just another tool in the box… it’s not dogma. Oliver Lehmann’s “Visionary Tools” offer some insight into how to avoid this dilemma, and I find them to be good common-sense tactics – nothing we wouldn’t already be trying to do anyway (short reporting cycles, defined roles, proper interpretation of data… common sense, in short).
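That “train on a track” dynamic is easy to simulate. A toy sketch with hypothetical numbers: every forward-looking estimate of the remaining cost sits below the payoff, so each individual “continue” decision is locally rational – yet the total spend quietly blows past the payoff:

```python
# Hypothetical project: a fixed payoff if finished, and a series of
# re-estimates of the remaining cost ("just a bit more, I promise").
payoff = 100
remaining_cost_estimates = [60, 30, 25, 20]

spent = 0
for estimate in remaining_cost_estimates:
    if payoff > estimate:   # locally rational: future value exceeds future cost
        spent += estimate   # ...but the spend accumulates all the same
print(spent)  # 135 spent chasing a payoff of 100
```

Every single decision was “correct” in isolation; the aggregate is a net loss of 35. That’s the dilemma, and why the decision framework needs honest re-estimation of the *whole* remaining path, not just the next leg.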

Say It Loud, Say It Proud

The best place to start changing culture is with yourself, and this is a process.  Just working in a “but… i could be wrong” now and then is a good start.  Saying “I WAS wrong, and here’s what I propose we do about it” in front of a room full of people is perhaps the ultimate test. Ultimately, if the nature of the relationships with your clients, customers, co-workers, and investors is honest, humble, and thoughtful, then saying “I’m wrong” means that you don’t really have to say “I’m sorry”.


A not really review of ‘The Social Network’

Ok – confession time:  I didn’t see the movie, I don’t plan to, and this isn’t a film review at all.  I am merely using the subject matter of the film as a device to further what, for me, is the more central and important lesson of this tale … and that is that ideas are cheap.

I will defer to the wise and eloquent Lawrence Lessig to provide a nugget of wisdom in a way that my mere mortal brain cannot. The following is a quote from an article/review he published in The New Republic about the “Facebook Movie” – it’s an amazing bit of insight:

But from the story as told, we certainly know enough to know that any legal system that would allow these kids to extort $65 million from the most successful business this century should be ashamed of itself. Did Zuckerberg breach his contract? Maybe, for which the damages are more like $650, not $65 million. Did he steal a trade secret? Absolutely not. Did he steal any other “property”? Absolutely not—the code for Facebook was his, and the “idea” of a social network is not a patent. It wasn’t justice that gave the twins $65 million; it was the fear of a random and inefficient system of law.

Whatever your ideas are about intellectual property, or whether or not you feel sorry for the “downtrodden” defendants in the film, execution is what matters, and real, successful execution isn’t just technical prowess – it requires a deep set of skills like vision, perseverance, and adaptability (for advice on how to acquire these elusive skills, read Eric Ries – but, I digress). When Zuckerberg launched “Facemash”, the precursor to Facebook, it was Oct 2003. I had been working on the internet and related fields, at that time, for nearly 6 years, and I can tell you – “social networking” on a large scale was going to happen – it was already happening in many proto-FB forms. Also, I can’t think of a product where the Network Effect has more of an impact on success than for a social network, so the winner was always going to be the one with the most users, period. Let’s also not forget that Facebook improved and became bigger and better BECAUSE of its users (again… network effect) – – the product grew and adapted, not because Mark Zuckerberg gave glorious birth to every feature in advance of demand – – but because he was smart enough not to.

So cheap, it can’t be good.

Here is a common cheap-idea life-cycle that I’ve seen occur so many times within my own world of digital media: the user AND company expect “A”, which is appealing if it works – but it doesn’t work, because the business model / the marketplace / the legal system / current technology don’t support “A”. The company proceeds anyway, because they still cling to the possibility of reaching the unrealistic but appealing “A”, even though there are significant barriers outside of their control, or a lack of funding, or whatever.

Truly good ideas, IMO, have depth… a path… a unique signature. In a way, an idea is like a snowflake, and initially, only the owner of that idea has that version. To take this idea to the extreme, I could say that I have an idea to build a time machine. Hell – from the demand side, that’s a smashing idea… just think of it! The fact that it violates the laws of physics (well, maybe not technically, but that’s for another discussion) makes it absurd, but even beyond that, if you were to press me on my thoughts on getting from A to Z (concept to launch), my only option would be to grin back at you and say… “A Time Machine, Man! Just Think of it!” It’s not a bad idea… it’s just an extremely cheap idea… so cheap as to be asymptotic to 0. The point is: give me any one- or two-sentence idea, and I have no doubt that there are probably hundreds, if not thousands, of people who have thought of this “half-idea”, but it is the entrepreneur with the vision and insight to see a real path to execution that makes the idea into a good one.

Zuckerberg Jr.

Another internet magnate who has been much maligned for NOT having ideas, but rather copying/stealing them, is Mark Pincus of Zynga. Now, don’t get me wrong… there are some lawsuits out there that accuse Zynga of literally stealing code and doing things that violate contracts and break the law – I don’t condone this at all… if they are found guilty, they should suffer the rule of law, but I do think they get a bad rap for some of the “copying” charges. I found this anonymous gamer post on a message board pretty salient:

Zynga was pulled into court for making Mafia Wars, by the original makers of Mob Wars. Mob Wars is an inferior and much lesser-known game to Mafia Wars, even though Mob Wars came first (which is sad). Zynga just has an amazing dev team and knows how to market.

That being said, Zynga is a total copy cat…they just cross-promote the hell out of their games, which makes them so popular. At this point, they’re crapping gold…no matter what new game they make, it becomes an instant hit.

I don’t know scientifically if this is the “general feeling” out there, but take it at face value, and let’s just say “ok – he’s right” – well, Zynga is good at developing and good at marketing… two pretty darn important pieces of the puzzle, don’tcha think? I think the really sad thing is that the Mafia isn’t getting their cut… I mean, weren’t they what made these games possible in the first place? Maybe Coppola should have sued Scorsese for making Goodfellas. Being a “copy-cat” in the sense described above may seem unethical or anti-innovation, but I disagree on both counts, and in an innovative economy, it’s absolutely necessary. The internet and software are all about copying, iterating, and improving… the whole industry evolved this way – it’s Hayek’s emergent order in its most virulent form.

So, I’ve given up on my time machine, but there are idea men out there who never stop…

Head in the Cloud

Readwriteweb’s recent addition of the “Readwrite Cloud” blog has been a real godsend for me and probably others who are faced with the task of creating products with “the cloud” in mind. It seems like every product discussion I have, overhear, or read about these days involves a cloud component. How can we “use the cloud” or how can we “cloud-enable” a product isn’t really how that discussion starts, though, and I think it’s worth getting past some of the technical considerations and discussing what the cloud really means for users and those of us who seek to please them.

Much like the now-unfashionable and perhaps defunct “web 2.0” label, “cloud computing” can mean lots of different things, but I think it is fairly simple if you look at it from the perspective of the end-user (as we product people should always try to do). For me – in a nutshell – it boils down to a “where/how/when” question of access to data, content, or applications. Take this interesting company, for example: as far as a “cloud” play for media goes, it can’t get any simpler, and there are lots of options for users for this kind of service. So how do they differentiate their product? Well, it’s a subtle but interesting twist, pointed out here in this article on TechCrunch:

… the service includes a slightly different twist – ZumoDrive tricks the file system into thinking those cloud-stored files are local, and streams them from the cloud when you open or access them.

That might seem like a strange feature, but if you think about it, it’s terribly clever – users, even savvy users, aren’t yet completely used to, or comfortable with, the “cloud” concept, and by mimicking a boring old file system, they bridge the gap between the user’s expectations and the real value of the service. So, this got me to thinking… what’s a good short list to think about when you are developing products for the cloud? I’m sure I’ll change my mind about what should be here as things progress, but this seems to me to be a good starting point.

  1. Access – This is an obvious one, but it can’t be overstated. Users will use the cloud because it gives them access across time and space (whatever device, whenever they want).
  2. Synchronization – While this is related to “Access”, there is a subtle difference.  Synchronization means that if I DO something or change any data via any interface or device, that change MUST exist across all devices and interfaces as soon as possible.
  3. Trust – This is a big one, and recently, a lot of the bad publicity Facebook has been facing is driven by this issue. Users have to trust that their data is safe and private, end of story. Something to consider here from a product standpoint is allowing users to back up their data. Also, companies should make it policy in their EULA that if they go out of business, each user will receive a back-up before the doors shutter… that would be a big one for me if I were going to do anything really important in the cloud.
  4. Openness – This is related to the Trust issue, but it’s slightly different in that what I really mean is creating products that can share data with other applications and services, while also keeping the barriers to moving data entirely to another service low. Just like customer service 101 says you’re never nicer to a customer than when they’re leaving, the same should hold true for cloud-based services.
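Point #2 above is harder than it sounds, because two devices can change the same data while offline. Here’s a minimal sketch of one naive strategy – last-write-wins with timestamps (made-up data; real services lean on vector clocks, operational transforms, or CRDTs to resolve conflicts less lossily):

```python
# Last-write-wins sync: each device holds {key: (timestamp, value)}.
# When two devices meet, the newer write for each key survives.

def merge(local, remote):
    """Merge two device states; for each key, keep the latest-timestamped value."""
    merged = dict(local)
    for key, (ts, value) in remote.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)  # the newer write wins
    return merged

phone = {"note.txt": (100, "draft"), "todo.txt": (50, "buy milk")}
laptop = {"note.txt": (120, "final"), "pic.jpg": (60, "beach")}

state = merge(phone, laptop)
print(state["note.txt"])  # (120, 'final') -- the laptop's newer edit wins
```

Simple, but note the trade-off: the phone’s concurrent edit to note.txt is silently discarded, which is exactly why “as soon as possible” sync is a real engineering problem and not a checkbox.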

I recently read this quote somewhere, and unfortunately, I didn’t Evernote it (no attribution… so sorry!), but I did remember it. I think it’s a good one…

Cloud Computing is not about Amazon. It’s about how you reach your customers.