I'm on a Delta flight equipped with GoGo in-flight wireless, and they
have an interesting campaign going on: free Twitter for all. It's a
pretty slick campaign, but I think it raises interesting net neutrality
issues because, in essence, Twitter is paying for preferred access.

Personally, I'm ok with it: I don't have any problem with an internet
carrier creating a "fast lane" that either side of the connection can
pay extra to use, so long as the lane is made equally available to all
comers, on the same terms.

That's not to say that all publishers are required to accept
advertisements from all organizations — I'm not excited about it, but I
wouldn't outlaw GoGo from accepting an ad for the Catholic Church on the
GoGo website while refusing an ad for atheism. As a publisher, GoGo can
choose what message to put on its own website, even if that message is
one-sided.

But as a communication medium, GoGo shouldn't be allowed to grant free
access to websites hosted by the Catholic Church, while simultaneously
refusing the same deal to an atheist organization.

I understand it's a tricky and totally arbitrary line, but I think
content-discrimination should be legal (to enable free speech), while
communications-discrimination should be outlawed (to prevent restriction
of free speech).

I think too much of the NN debate is wrapped up in thinly-veiled
anti-corporate fearmongering (the little guys need to be protected from
the big guys!!). Even if it's a fine goal (and I don't think it is), it
doesn't seem to have any Constitutional or free/fair-market basis that I
can see.

Net neutrality shouldn't be about mandating equal performance, but equal
access.

I'm curious what you think.


I haven’t seen this mentioned anywhere, but see screenshot below.  I was curious when PayCycle was founded, so I searched “paycycle founded”.  Google apparently saw enough similarity in the search results that rather than just giving me the links, it gave me *the answer*. Especially interesting because not all of the answers were right (eg, the second search result is clearly wrong).  Pretty amazing!

Founder and CEO of Expensify
Follow us at http://twitter.com/expensify

So Egypt has its internet back, but I still can't figure out precisely
what was gone when it was gone. Can you help? So far as I can determine:

– Cellular service was shut off only in certain regions (eg, at the
sites of the major protests) while left on elsewhere in the country.

– Landlines continued functioning everywhere.

– I've seen no sign that domestic internet was affected. For example,
it's possible that all homes and businesses still had live network
connections that simply weren't resolving DNS, or perhaps *were*
resolving DNS internally. It's even possible that local DNS caches were
resolving completely normally — even for international domains —
except there were no routes to the IP addresses to which they resolved.

– Indeed, even if all ISPs turned off all broadband everywhere, there
would still be large pockets of functioning LANs (universities, housing
complexes, hotels, etc).

– The consequences of total domestic internet blackout are very severe.
I don't see any sign that government and critical services run their
own network (though the military might), nor any sign that the internet
was selectively disabled for only individuals and businesses while
sparing hospitals, police stations, power plants, etc. Furthermore, I
haven't heard that any critical services lost domestic internet or
telephone access, even though I imagine that would be a very interesting
story if true.

I think understanding what actually happened is important so that we
can plan and act in a way that is optimized for the real world, rather
than a (potentially) unreasonable worst-case scenario that never
actually occurs. I'd love your help in fact-checking the above
assumptions by providing evidence (links) to the contrary.

Regardless, none of this really changes my core thesis, which is that
whatever solution is built must:

– Somehow become very popular and widely deployed *before* the event,
requiring substantial "added value" even when the internet is accessible.

– Define "added value" in terms that the average person cares about,
which is *not* anonymity, security, privacy, etc. Rather, it needs to
be speed, convenience, reliability, and so on.

– Take an approach of "adding to" rather than "replacing" whatever
more-popular alternatives are already in place (twitter, aim,
bittorrent, skype, etc) so as to ensure users sacrifice nothing by using it.

– Take best, simultaneous advantage of whatever resources are available,
at all times. If there is Bluetooth, use it. If there is a functioning
LAN, use it. If there is a functioning sitewide/domestic/international
WAN, use it. And so on.

– Anticipate the imminent failure of any of these methods at any time by
proactively establishing fallbacks (eg, a DHT in case DNS fails, gossip
in case the DHT fails, sneakernet in case wireless fails, etc.).

– Require no change in user behavior when one or more methods fail. So
the interface used to tweet, fileshare, make a call, etc — all these
need to work the same for the user (to the greatest possible degree)
irrespective of what transport is used.

– Work on standard, unaltered consumer hardware (no custom firmware, mod
kits, jailbreaks, etc) with standard installation methods (app stores,
web, etc).

– Be incredibly easy for people to use who aren't tech-savvy. This
means spending 10x more time testing and refining the usability of the
system than actually developing sexy esoteric features.
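The "use whatever is available, fall back automatically, same interface regardless" requirements above can be sketched as a simple transport-priority selector. This is a minimal illustration with invented names, not a real networking stack:

```python
# Hypothetical sketch: pick the best available transport, in priority
# order, and fall back automatically when one fails. All class and
# method names here are illustrative, not from any real library.

class Transport:
    def __init__(self, name, available):
        self.name = name
        self.available = available  # callable returning True/False

    def send(self, message):
        if not self.available():
            raise ConnectionError(self.name + " unavailable")
        return (self.name, message)

def send_with_fallback(transports, message):
    """Try each transport in priority order; return the first success."""
    for t in transports:
        try:
            return t.send(message)
        except ConnectionError:
            continue  # proactively fall back to the next layer
    raise ConnectionError("no transport available")

# Priority: WAN first, then LAN, then Bluetooth.
transports = [
    Transport("wan", lambda: False),   # internet is down
    Transport("lan", lambda: True),    # local network still works
    Transport("bluetooth", lambda: True),
]
used, _ = send_with_fallback(transports, "hello")
```

The key point is that the caller's interface never changes: it just calls send, and the fallback chain is invisible to the user.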

I really do think this is a relatively easy thing to build (at least, in
a minimal form), using existing hardware and proven algorithms. I'd
suggest something like:

1) Start with an open-source twitter application. Google suggests one,
though I haven't personally used it.

2) Add a central server, just to make it really easy for nodes to
communicate directly. (We'll replace this with a NAT-penetrating mesh
*after* the much more difficult task of getting this popular and widely
deployed.)

3) When you start up, connect to this central server. Furthermore,
whenever you see a tweet from anyone else using this client, "subscribe"
to that user via this central server. (Eventually you'd establish a
direct connection here, but we'll deal with that later.)

4) Every time you tweet, also post your tweet to this central server,
which rebroadcasts it in realtime to everyone subscribed to you. Voila:
we've just built an overlay on top of twitter, without the user even
knowing. All they will know is that tweets from other BuzzBird users
for some reason appear instantly. And the next time twitter goes down,
all tweets between buzzbird users will continue functioning as normal.
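The subscribe/rebroadcast logic in steps (2)-(4) can be sketched as an in-memory relay. The names are illustrative; a real server would sit behind sockets or HTTP, but the data flow is the same:

```python
# A minimal in-memory sketch of the central relay: clients subscribe
# to authors they've seen tweeting from this client, and every tweet
# is also posted to the relay, which rebroadcasts it in realtime to
# everyone subscribed. Names here are hypothetical.

from collections import defaultdict

class Relay:
    def __init__(self):
        self.subscribers = defaultdict(set)  # author -> set of client ids
        self.inboxes = defaultdict(list)     # client id -> delivered tweets

    def subscribe(self, client, author):
        self.subscribers[author].add(client)

    def publish(self, author, tweet):
        # Rebroadcast instantly to everyone following this author,
        # independently of whether twitter itself is up.
        for client in self.subscribers[author]:
            self.inboxes[client].append((author, tweet))

relay = Relay()
relay.subscribe("bob", "alice")
relay.publish("alice", "hello world")  # also posted to twitter, normally
```

If twitter goes down, only the "also posted to twitter" half fails; delivery between overlay users continues unchanged.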

5) Then start layering features on top of this, focused on making a
twitter client that is legitimately the best — irrespective of the
secret overlay.

6) For example, add a photo tweeting service. Publicly it'll use
twitpic or instagram or whatever, so all other users will see your
photos just fine. But buzzbird users will broadcast the photo via this
central server, faster and more reliably than the other services, as
well as locally cached for offline viewing. Repeat for video, files,
phone calls, etc.

7) At some point when you've established that people actually like and
use buzzbird with its very simple and fast central server, THEN start
thinking about P2P. (Seriously, do NOT think about P2P before then as
it'll only slow you down and ensure your project fails.)

8) To start, keep the central server and just use it as a rendezvous
service for NAT penetration. So still centrally managed, but with
direct P2P connections. Then when you come online, you immediately try
to establish NAT-penetrated direct connections with everyone you're
following. This of course immediately presents you with challenges: do
you need to connect to *everyone* you follow? If only some, which? Take
these problems one at a time knowing you can always fall back on the
central server until you perfect it. In other words, the goal is to
remove the central server, but you can take baby steps there by weaning
yourself off of it.
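The rendezvous role in step (8) reduces to a registry of public endpoints that introduces two peers to each other. A minimal sketch, with invented names (real NAT hole punching also needs simultaneous outbound UDP attempts and timing logic, which is omitted here):

```python
# Sketch of the central server demoted to a rendezvous service: it
# records each peer's public endpoint so two clients can attempt a
# direct NAT-penetrated connection. Hypothetical names throughout.

class Rendezvous:
    def __init__(self):
        self.endpoints = {}  # username -> (public_ip, port)

    def register(self, user, ip, port):
        self.endpoints[user] = (ip, port)

    def introduce(self, a, b):
        """Give each peer the other's endpoint; both then connect out
        simultaneously, which opens the NAT mappings on both sides."""
        if a in self.endpoints and b in self.endpoints:
            return self.endpoints[b], self.endpoints[a]
        return None  # fall back to relaying via the central server

rv = Rendezvous()
rv.register("alice", "203.0.113.5", 40001)
rv.register("bob", "198.51.100.9", 40002)
pair = rv.introduce("alice", "bob")
```

Returning None when a peer is unknown is exactly the "fall back on the central server until you perfect it" escape hatch described above.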

9) Similarly, add Bluetooth, ad-hoc wifi, USB hard drive sync (aka
"sneakernet"), etc. These would all be presented to the users in terms
of real world benefit ("Keep chatting while on the airplane!" "Share
huge files with a USB flash drive!" and so on.), while simultaneously
refining the tools that they'll use during a partial or total internet
blackout.

10) Eventually when you've figured out how to move all functions off the
central server — the nodes start up, establish direct connections to
some relevant subset of each other, build a DHT or mesh network, nodes
that can relay for nodes that can't, etc — the last function will be
"how does a newly-installed node find its first peer?" This is called
the "bootstrapping" problem, and is typically done with a central
server. But it needn't be done with *your* server. Just use twitter
for this: every time you start, re-watermark your profile image with
your latest IP address (or perhaps put it into your twitter signature
line, or location, or something). This way the moment you do your first
tweet, everybody who sees it will try to contact you (or some subset, so
as to not overload you). Then you can turn off your own central service
and just use Twitter's.
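The profile-image trick in step (10) can be sketched by appending the endpoint after the image data, since most decoders ignore trailing bytes. The marker below is an invented convention, not any real watermarking standard:

```python
# Hypothetical sketch of bootstrapping via the profile image: append
# your current IP:port after the image bytes (most image decoders
# ignore trailing data), and let followers' clients extract it.

MARKER = b"BUZZBIRD:"  # invented sentinel, not a real standard

def embed_endpoint(image_bytes, ip, port):
    payload = f"{ip}:{port}".encode()
    # Strip any previously embedded endpoint before appending the fresh one.
    base = image_bytes.split(MARKER)[0]
    return base + MARKER + payload

def extract_endpoint(image_bytes):
    if MARKER not in image_bytes:
        return None  # not a buzzbird user (or no endpoint published)
    ip, port = image_bytes.rsplit(MARKER, 1)[1].decode().split(":")
    return ip, int(port)

img = b"\x89PNG...image data...IEND"  # stand-in for a real PNG
stamped = embed_endpoint(img, "203.0.113.5", 40001)
```

Re-running embed_endpoint on an already-stamped image replaces the old endpoint, which is what "re-watermark with your latest IP address" requires on every startup.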

11) When the "dark days" come and twitter goes offline, your nodes won't
even notice. They'll continue to establish their DHT with whatever
subset of the network is interconnected, relay for each other, etc. If
the internet is totally gone, your users will use bluetooth, wifi,
sneakernets. They'll be ready because you *trained* them to survive on
their own *before* they needed to, rather than just handing them a knife
assuming they'll know how to use it when the time comes.

Really, the biggest challenge in all of this is whoever gets to step (6)
will immediately be acquired by Twitter for an enormous sum of money.
But hopefully that person will, with their new-found wealth, continue on
to (7) and beyond.

This is a doable thing. One motivated person could do most if
not all of this. Is that person you?


Egypt appears to have cut all internet connectivity with the rest of the
world in an attempt to quell its use in organizing protests. The only
reason this makes any sense is if the tools used to organize the
protests (Twitter, Facebook, Gmail, etc) are hosted outside Egypt.

To this you might say "Let's just host protest-organizing tools on
servers inside protest-likely nations in anticipation of them using this
strategy again." But that won't work because odds are the government
would just seize all protest-organizing servers within their borders.

So the only protest-tools that will continue to work reliably are those
that continue to work without access to the outside world, without
relying on locally-hosted servers, and *without even relying on the
internet at all*. It's a tall order, but here's how I'd do it.

1) Recognize that this service needs to be used in the good days, such
that there is adequate distribution already in place when the bad days
happen. THIS IS THE HARDEST PART. I say this in all caps because this
is why no meaningful system like this exists today: the people most
likely to build it are more obsessed with esoteric technical problems
than with solving the issues that actually matter in the real world.
Asymmetric, anonymized, mesh-distributed, onionskin-routed communication
doesn't mean anything if nobody uses it. So before even thinking about
the technology, we need to think how to make it relevant to users who
*aren't* protesting (yet).

2) At an absolute minimum, it needs to be no worse than the existing
alternatives. So if it's going to replicate Twitter, it needs to be at
*least* as good as Twitter, otherwise everybody will use the *real*
Twitter (until it's turned off by their local neighborhood dictator).
One way to be better than Twitter is to actually be better than Twitter.
Good luck with that. Another way is to just make your tool post to
Twitter. I think that's a much better idea: if this tool (let's call it
"anoninet" just for kicks) offers some Twitter-like functionality, it
should be completely compatible with the real Twitter in the
99.99999999999% of situations where the real Twitter is actually
available. Same goes for Facebook, Flickr, etc.

3) Ok, so anoninet's primary value in "good times" is starting to take
shape: it's a one-stop-shop to post to all your social networks. So you
install this thing, type in all your passwords (You could store them
locally in some encrypted keychain decrypted by a master password, but
that's the sort of technomasturbation thinking that obscures real-world
requirements; in reality just store it unencrypted because those who
don't care don't care, and those who do should really just encrypt their
whole hard drive), then you can post status updates, photos, videos, and
everything will automatically go to the right place. Indeed, before you
even think about making this into some sort of resilient
protest-enabling tool, you should make this the best possible
social-network posting tool. (Because if it's not that, then nobody
will have it installed when they want it most.) I'd suggest emphasizing
how this thing works even with unreliable internet, essentially letting
you queue up everything locally and it does background uploading as the
network becomes available. Similarly, it downloads everything locally
for offline reading. Odds are your protest-likely environment has
shitty internet to start, so this feature will likely have immediate
value. Add in really good support for USB-connected devices (cameras,
videocams), and basically present it as the single best way to do social
networking in a nation with shitty internet.
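The "queue locally, upload in the background" behavior in step (3) is the core mechanic. A minimal sketch, where the uploader callable stands in for the real Twitter/Facebook APIs:

```python
# Sketch of the local outbox: posts always succeed immediately (queued
# to local storage), and a background flush pass drains whatever it
# can whenever the network happens to be up. Names are illustrative.

class OutBox:
    def __init__(self, uploader):
        self.uploader = uploader  # callable(post) -> bool (success)
        self.pending = []

    def post(self, item):
        self.pending.append(item)  # always succeeds, even offline

    def flush(self):
        still_pending = []
        for item in self.pending:
            if not self.uploader(item):
                still_pending.append(item)  # retry on the next pass
        self.pending = still_pending

network_up = {"on": False}
sent = []
def uploader(item):
    if network_up["on"]:
        sent.append(item)
        return True
    return False

box = OutBox(uploader)
box.post("photo1")
box.flush()              # offline: nothing sent, nothing lost
network_up["on"] = True
box.flush()              # online: queue drains
```

The same pattern runs in reverse for downloads: fetch opportunistically, cache locally, read offline.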

4) Step 4 is to succeed with step (3). Don't even think of anything
else until you've done that. Seriously, it's a waste of your time and a
disservice to your users. (3) needs to be totally nailed and immensely
popular before anything else matters. I'd say something like 10% of
your target population needs to be using it before you consider continuing.

5) Once you've got huge distribution of your client-side
social-network-optimizer, then you can start to raise the bar. Because
it's targeted to environments that have expensive and/or unreliable
internet, P2P starts to sound interesting. Throw in a network-localized
DHT and build out a distribution network that "rides" on these other
networks. So every time they post to Twitter, Facebook, Flickr,
YouTube, or whatever — they're also posting to anoninet. And when
another anoninet user is reading your Twitter stream, somehow they detect
each other and rather than getting the data from Twitter (for example),
they get it directly via some localized P2P connection. Present this to
the user as faster, more reliable, and cheaper than getting it from the
*real* YouTube.

6) Quietly encrypt everything and tunnel over commonly-used ports.
Don't talk about this, just do it. Users don't care until they do, and
by then it's too late.

7) Ok, so at this point we have wide distribution of a very popular
social networking tool that uses a localized P2P mesh as an optimized
fallback to the major global tools. Its major advantage is it works
over networks that are slow, unreliable, or expensive. This'll save you
in the Egypt case; these users would continue using the tools they
already use, to talk to the people they already talk with, and
everything will continue functioning as normal. They won't be able to
talk with the rest of the world, but they *will* be able to talk amongst
themselves, which is the important thing. Furthermore, because it's all
P2P, there are no servers to seize, and because it's all encrypted over
common ports, it's indistinguishable from all other encrypted traffic.

8) However, if this had existed in Egypt, odds are Egypt would have just
shut down the internet, period. If a dictator is willing to kill you, odds
are they wouldn't blink at turning off your email. So how to make this
work without internet? The answer is: make it incredibly easy to batch
and retransmit data like FidoNet back in the day. So when shit is
*really* going down, you whip out your favorite 4GB, 32GB, or 640GB USB
drive and just sync your local repository (remember how everything was
conveniently cached locally for fast offline access?) with the device.
Optimize it to sync the most popular content first, basically ensuring
that the most interesting/important message is also the most widely and
redundantly distributed.
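The "most popular content first" sync in step (8) is a simple greedy fill of the available space. A sketch, using follower counts as a stand-in for whatever value metric the real system would use:

```python
# Hypothetical sketch of the sneakernet sync planner: given limited
# space on a USB drive, copy the most-followed content first so the
# most important messages end up the most widely replicated.

def plan_sync(items, capacity_bytes):
    """items: list of (name, size_bytes, follower_count).
    Returns the names to copy, most-followed first, within capacity."""
    chosen, used = [], 0
    for name, size, _ in sorted(items, key=lambda i: -i[2]):
        if used + size <= capacity_bytes:
            chosen.append(name)
            used += size
    return chosen

items = [
    ("cat_video", 300, 10),
    ("protest_map", 200, 5000),
    ("speech_audio", 500, 900),
]
plan = plan_sync(items, 800)
```

Because every node ranks by the same popularity signal, the highest-value content naturally ends up redundantly copied across many drives without any coordination.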

9) Finally, this needs to spit out an installable copy of itself to
whatever removable media is available. This way when the shit starts to
*really* go down, as people realize the true value of this system it can
spread fast to the people who need it.

Voila. A tool that supports communication amongst protesters even in
the face of total internet blackout. Some other random thoughts:

– Ideally it'd piggyback on existing credentials. So when you install
this thing you don't need to think "I'm creating a new account".
Rather, you just install this thing, type in your Twitter username and
password, and whatever giant asymmetric keypair it creates internally is
just some nameless thing associated with that Twitter account. (And you
might have multiple.)

– This thing needs to broadcast itself via existing networks in a
totally transparent way, so if we're both users and I read your Twitter
stream, I should know you're also a user without you ever telling me.
The first way that comes to mind is this thing could watermark your
profile image with maybe a digital signature (or perhaps just jam it
into some sort of extra field in the image). Then when I follow you, my
client sees the watermark, reaches out to the DHT, sees that you're
signed in (or not), and establishes a NAT-tunneled P2P connection directly.

– Social networks are particularly good for this sort of architecture as
they map well to the "publish/subscribe" model. This works easily on a
P2P network (you register yourself with the DHT by name and
keyword/hashtag, and then when you post there everybody who is
"following" you or a particular hashtag gets your data), as well as
create an implicit "value" metric for use when synchronizing data in
"sneakernet mode" (publishers/hashtags with a high follower count are
assumed to be more valuable and thus beat out less-popular content).

– This sort of system actually isn't that useful to terrorists,
criminals, drug-dealers, and so on because it's designed for mass public
communication (not individual private communications). Granted, nothing
in this protects the individual from being targeted, but that's an
entirely different problem. (And I wager one that could be layered on
top of this in a straightforward manner.)

In all honesty, this isn't that hard a thing to build. One dude could
do it. I could personally do it, and know several others who could as
well. But I'm busy. Hopefully a better person than me with more time
on their hands will pick up on this and do what needs to be done. The
world will thank them for it, though its dictators won't.

My blog (including this post) is at http://quinthar.com
Follow me at http://twitter.com/quinthar

This one is from 2008.  I was asked something along the lines of “Well if you’re so so smart, how would you fix the music industry?”  Here’s my answer:


David’s Voluntary Payment Plan

David Barrett




This plan recommends creating “music registrars” to authoritatively manage song metadata in a fashion similar to how domain registrars authoritatively do the same for domain names. Artists (or their representatives) upload songs to registrars, who in turn check their waveform fingerprints against a master database of all known songs. If the song has already been registered by another owner, a conflict resolution process is started. Otherwise, the song is transcoded to an MP3 and tagged with a variety of metadata (artist and song name, artist website, etc), including “payment protocols” that enable fans to support the artist in a standardized way. iPods and other MP3 players are gradually outfitted with integrated support for various payment protocols, as well as methods for receiving artist communication or learning of and purchasing artist merchandise, concert tickets, and so forth.

I. Example of Operation

First, here’s a quick walkthrough of how the system would be used in common operation:

A. Adding a new song

Alice, an independent musician, selects from one of several music registrars, creates a free account, uploads her track in the FLAC format, assigns it a name, optionally organizes it in one or more albums, and is done. The entire operation is free, takes less than 10 minutes, and requires no personal information beyond an email address.

B. Downloading a song

Bob, a music aficionado, browses a variety of free music outlets for new songs. One of those locations has an active online community around indie music, and the forum is buzzing around a new musician, Alice. The forum links to a page where Alice’s music can be downloaded — he clicks the link, chooses the format and bitrate, and downloads the MP3 for free. Though the website allows low-quality 128Kbps versions of the song to be downloaded or streamed straight from the server, for cost reasons it only allows 256Kbps and FLAC versions to be downloaded via a P2P network. He’s all about quality, so he whips out his favorite P2P application and downloads the FLAC.

C. Listening to a song.

When the download completes, Bob copies the file several places — his laptop, his home stereo, his iPod, his phone — all of which support the completely standard, unprotected audio format.

D. Supporting Alice

Bob decides that he really likes Alice’s music and wants to see more of it get played. He has several ways to help that happen:

  • One way is to go back to the website where he downloaded the music in the first place. There, there’s a small (but growing) forum where Alice fans discuss her music, links to other music by Alice, recommendations of other music by Alice, and so on. Furthermore, there’s a quick note by Alice herself saying “Hi, I’m trying to raise $1000 to fund my next album, please help me out!” Bob sees she’s up to $950 right now. He’s got a few options of how to help. One is to just do a simple cash contribution, one is to help raise up to $1000 (at $950 so far) with the caveat that if she doesn’t raise the full amount within a set timeframe, the money is given back. Another is a subscription of $1/mo that gets his name put on a list of True Fans. Yet another is to buy the last limited-edition autographed copy of Alice’s first vinyl album for $50. All of these options can be paid with PayPal or a credit card.

  • Another way is to use a feature built into iTunes and his iPod to auto-support any song he listens to more than 5 times, to the default (but adjustable) amount of $0.05/listen. Similarly, whenever he looks at the face of his iPod to remember who he’s listening to, he sees Alice’s message that she’s trying to raise $1000 and is up to $950. Likewise, he sees there’s one more copy of the limited edition vinyl available.

Ultimately, he decides to go for the vinyl recommended by his iPod. He goes to iTunes, chooses “open musician’s website”, and buys the vinyl online.

E. Getting paid

When Alice signed up, she had no idea her music would be such a hit. But her inbox is full of messages, donations, and all her vinyl copies (which she hasn’t even made yet) have already been sold.

  • Getting to work, she uploads the cover art design and asks her registrar to press the given number of vinyl records and FedEx them to her for signing. When she sends them back, the company redistributes them to the customers who purchased them, and the money is deposited into her account.

  • As for how to get her money, she has a couple options. The classic approach is to just give her direct deposit information and it’s deposited via the ACH network (automated clearing house). Another is to give her PayPal information. She doesn’t like any of those options, so she goes with a third option of just having a reloadable prepaid Visa card sent her way — any money added to her account is instantly available for use at any merchant, or even to be withdrawn from any ATM.

II. Music Registrars

Core to this plan is the notion of “music registrars”. Like DNS registrars (from which this draws inspiration), there are many and all provide compatible functionality while competing aggressively on price and value-added services. Musicians are free at any time to sign up with any number of registrars, or move tracks between registrars at a later date. But each track ultimately maps back to a single registrar that manages (at least) standardized metadata operations around that track. In essence, a registrar provides at least the following:

  • Account creation. Generally with a username/password, though optionally with more secure mechanisms (multi-factor authentication, PKI, etc).

  • FLAC storage. For every track managed, permanently store a master FLAC version.

  • Metadata hosting. For a given track, host its authoritative name, artist, album, etc. (essentially, ID3 tags) in one or more languages.

Though not strictly required, in general a registrar will offer a wide variety of additional services, including some subset of:

  • Transcoding and hosting. Generates a variety of file formats from the master FLAC, including MP3, Flash, etc. and hosts them on the web and P2P networks.

  • Payment gateway. Accepts payments from fans according to a variety of payment protocols and securely deposits into the artist’s account.

  • Fan management. Forums, blogs, RSS feeds, and all the accouterments of web 2.0.

  • eCommerce. Anything ranging from a Yahoo Store-like checkout system to a CafePress-style product generation assistant.

  • Recommendation engines, playlist management, webcasting radio stations, promotion services, gig management, tour assistance, discount music equipment, etc. Basically, each registrar will attempt to provide artists with a complete one-stop-shop of all things they could possibly need to be a happy, successful musician.

A service exists that lets anybody look up the latest metadata on any track. (Typically you would just download the metadata straight from its registrar, but there would be a mechanism to determine who the registrar is — if any — for an unknown piece of music.) This service uses a combination of servers hosted by the registrars, as well as servers hosted by an independent organization that manages the registrars themselves. This organization is focused exclusively on the operation of enabling transfers of music between registrars, resolving disputes between registrars (and between users and registrars), and authoritatively stating which registrar is currently managing which track. This organization is funded through annual re-certification fees paid to the organization by registrars.

One operation that is particularly interesting is: how does this organization uniquely identify each track in order to guarantee that each is only being represented by a single registrar? The answer is by using waveform fingerprints. Each registrar holds onto the master FLAC for every song in its management. Upon adding a new song, it uploads a “fingerprint” of the song to the master organization, which then confirms no other song has the same signature. (If there is a conflict, the organization investigates and resolves it.) The organization will make the choice as to which signature function to use (and it needn’t be perfect, it’s just a tool in helping proactively identify and resolve conflicts), and it can at any point decide to use a new function by simply having all registrars re-fingerprint all FLACs with the new function. Again, the fingerprinting doesn’t need to be (and won’t be) perfect — it’s just a flag that triggers manual corrective action. The better the function, the less wasted work.
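The conflict-detection workflow described above reduces to a master map from fingerprint to registrar. A minimal sketch — the fingerprint function itself is out of scope (and, as noted, needn't be perfect), so a placeholder string stands in for a real waveform fingerprint:

```python
# Hypothetical sketch of the master organization's uniqueness check:
# registrars submit a fingerprint with each new track, and the
# organization either records it or flags a conflict for the manual
# resolution process described above.

class MasterOrg:
    def __init__(self):
        self.registry = {}  # fingerprint -> (registrar, track_id)

    def register(self, registrar, track_id, fingerprint):
        if fingerprint in self.registry:
            # Same waveform already registered elsewhere: flag it,
            # returning the existing claimant for investigation.
            return ("conflict", self.registry[fingerprint])
        self.registry[fingerprint] = (registrar, track_id)
        return ("ok", (registrar, track_id))

org = MasterOrg()
first = org.register("registrar-a", "alice-track-1", "fp123")
dupe = org.register("registrar-b", "bootleg-copy", "fp123")
```

Switching fingerprint functions is just re-running register for every track under a freshly emptied registry, which matches the "re-fingerprint all FLACs with the new function" migration described above.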

III. MP3, ID3, and Metadata

In general practice, a musician would upload a track’s master FLAC to her music registrar, and the registrar would generate a series of MP3s that have all the ID3 tags correctly set. The musician could then do whatever she liked with those MP3s — email them, post them to P2P networks, post them on forums, burn them to CDs, etc — and the ID3 tags would just be carried along with them.

However, the metadata can be indexed, distributed, and used in any way, even outside of MP3s — the same information can be downloaded from the registrar at any time.

IV. Music Metadata and Player Support

In general, the metadata associated with a particular song can be any arbitrary name/value pair that the owner sees fit to associate with the song. There are no strict requirements or limitations on what sort of metadata must be associated. Similarly, players can choose to support all, none, or any subset of the metadata contained within a file. Any metadata not understood should be simply ignored. Some types of metadata include:

  • The standard ID3 tags: The obvious metadata includes artist name, song name, album, genre, and everything else you typically see in MP3 players. Example:
    Name: Before Today
    Artist: Everything but the Girl
    Album: Walking Wounded
    Track: 1

  • Unique song GUID: A globally unique identifier assigned by the registrar to this song. A given song would have the same GUID across all bitrates and encodings, for example, but different mixes of this song would have different GUIDs. In general, all MP3s with the same GUID should have the same waveform fingerprint; similarly, in general, no two tracks with different GUIDs should have the same waveform fingerprint. This GUID can be used by the player, website, or other service for whatever purpose it likes (it’s handy to have a key by which to index the song). Example:
    GUID: s8d9fgfud6s6d6f8ds8sys6s65

  • Metadata URL: A new tag would be a HTTP URL from which the latest authoritative metadata can always be downloaded in some standard format (I’d propose JSON, others might argue XML, but the specific choice is TBD). Any player or service can download the latest metadata for this track at any time, possibly rewriting the MP3 itself with the new information. Example:
    MetadataURL: http://mytunes.com/meta/s8d9fgfud6s6d6f8ds8sys6s65

  • Payment protocols: A series of descriptions through which this artist can be automatically compensated according to some predetermined protocol. There will be many different payment protocols (and new ones all the time), some of which might include direct deposits into bank accounts, charging to phone bills, reverse charges to prepaid credit cards, PayPal transfers, eGold transfers, or whatever. It’s likely each registrar would offer one or more of the most well-known payment protocols by default, but there is no restriction on somebody coming out with a new payment protocol and then associating it with their song. (More details on this below.) Example:
    Payment: ach://<bankaccount>,<institution ID>
    Payment: paypal://<email address>
    Payment: raise://amount=$1000&current=$950&by=2008/4/1

  • Hash: Though there’s no strict requirement that a given song be distributed universally as a binary-identical MP3 for each given bitrate, it’s reasonable to assume that this convergence would occur. Thus a valid piece of metadata would be the hash of a given encoding, which can be used by the player to verify that the file hasn’t been corrupted. Example:
    Hash: MP3/256/SHA1(3da3f0afc0d772825c43e310fe34eacf0dea204b)

  • Message of the day: A general message that the artist wants to associate with this song. Can be anything from a simple hello, a description of the song, a request for help, an advertisement, or anything. This could appear on the face of an MP3 player, or in a bubble on your desktop, or however the player feels fit to show it. Example:
    MoTD: Only 1 copy left of my limited edition vinyl album, $50!
    MoTD: Don't forget, I'm playing the Fillmore tonight at 8pm!

  • Lyrics: The lyrics of the song itself could be easily included in the song, or perhaps a URL where the lyrics can be downloaded.

  • Other songs by this artist / recommended by this artist: Links to other songs by this artist. A player could be configured to poll this at some frequency to be automatically notified when new music by an artist becomes available.
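
The Hash tag above could be checked by any player. Here’s a hypothetical sketch: the tag format “MP3/&lt;bitrate&gt;/SHA1(&lt;hex&gt;)” is just my reading of the example, and a real player would hash the actual MP3 bytes it downloaded:

```python
# Hypothetical sketch of verifying a "Hash: MP3/256/SHA1(<hex>)" tag.
# The tag format is parsed exactly as shown in the example above.
import hashlib
import re

def verify_hash(tag, data):
    """Return True if `data` matches a 'MP3/<bitrate>/SHA1(<hex>)' tag."""
    m = re.fullmatch(r"MP3/\d+/SHA1\(([0-9a-f]{40})\)", tag)
    if not m:
        raise ValueError("unrecognized hash tag: " + tag)
    return hashlib.sha1(data).hexdigest() == m.group(1)

# Stand-in bytes; a real player would feed in the downloaded MP3.
data = b"fake mp3 bytes"
tag = "MP3/256/SHA1(%s)" % hashlib.sha1(data).hexdigest()
```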

The important thing to take away is that metadata can contain anything, and registrars merely record and host it — it might or might not have any awareness of what the various name/value pairs actually mean. You needn’t ask anybody’s permission or get the approval of any standards body to create new metadata: just add it to your song, and any player that doesn’t expect it will ignore it.
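
As a toy illustration of that openness, here’s how a player might collect the name/value pairs, carrying along any key it doesn’t recognize (all names and values below are made up):

```python
# Sketch: parse registrar metadata as loose name/value pairs. Keys the
# player doesn't recognize are harmlessly ignored, so anyone can invent
# new metadata without a standards body's approval.

KNOWN_KEYS = {"MetadataURL", "Payment", "Hash", "MoTD", "Lyrics"}

def parse_metadata(lines):
    """Collect name/value pairs; repeated keys (e.g. Payment) accumulate."""
    meta = {}
    for line in lines:
        if ":" not in line:
            continue
        key, value = line.split(":", 1)
        meta.setdefault(key.strip(), []).append(value.strip())
    return meta

raw = [
    "MoTD: Don't forget, I'm playing the Fillmore tonight at 8pm!",
    "Payment: paypal://band@example.com",
    "FavoriteColor: blue",  # unknown key -- a player just ignores it
]
meta = parse_metadata(raw)
```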

V. Artist Compensation via Player Integration

The basis of this system is to enable fans who want to compensate artists whenever and wherever the mood strikes them, in whatever amount, for whatever reason they come up with. This is enabled through integration with the players themselves, as this reduces the latency between hearing the song, making the decision to support the artist, and actually conducting the transaction.

The specific method of the integration is up to the designer of the player or service. But some examples that could be applied to any general MP3 player include a “thumbs-up” button where $0.50 is sent to the artist when pressed, or an “auto-tip” option where $0.05 is sent to the artist each time his song is played in its entirety, etc. All of this would be opt-in and configurable by the user with regard to both the amount being paid and the frequency of payment.
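
A minimal sketch of that opt-in configuration, with entirely made-up amounts and hooks (no real player exposes this API):

```python
# Hypothetical sketch of the "thumbs-up" and "auto-tip" ideas. Amounts
# and triggers are user-configured assumptions, not any real player API.

class AutoTipper:
    def __init__(self, tip_per_full_play=0.05, thumbs_up_amount=0.50):
        self.tip_per_full_play = tip_per_full_play
        self.thumbs_up_amount = thumbs_up_amount
        self.pending = {}  # artist payment URI -> accumulated amount

    def _credit(self, payment_uri, amount):
        self.pending[payment_uri] = self.pending.get(payment_uri, 0.0) + amount

    def on_song_finished(self, payment_uri):
        """Called by the player when a song plays in its entirety."""
        self._credit(payment_uri, self.tip_per_full_play)

    def on_thumbs_up(self, payment_uri):
        """Called when the listener presses the thumbs-up button."""
        self._credit(payment_uri, self.thumbs_up_amount)

tipper = AutoTipper()
tipper.on_song_finished("paypal://band@example.com")
tipper.on_thumbs_up("paypal://band@example.com")
```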

Similarly, metadata and players could generally conform to standard ways of advertising merchandise and concert tickets related to the music. Depending on the player’s form factor, it could even provide basic storefronts, one-click additions of tour dates to Google Calendars, or whatever type of interaction the device feels is appropriate to facilitate between artists and fans (perhaps even with a commission for the transaction paid to the device manufacturer). Ultimately, this is left up to the artists, fans, and player manufacturers to decide – the music registrar just manages the metadata without being aware of what it means or how it’s used.

As for how the payment would be technically conducted, this would depend on the payment protocol and would likely be decided by a period of competition ultimately leading to a few widely supported “de facto” standards. For example, a phone-integrated player might use a payment protocol that puts song contributions straight onto your phone bill. An iPod might keep an internal count of what payouts are left to be done, and then upload the transactions to an iTunes-integrated micropayment engine when synchronized. WinAMP might accumulate transactions until they exceed some threshold where paying the artist directly via PayPal makes sense. And so on. Payment providers will compete vigorously for adoption by players and registrars alike, but the ultimate decision for who to pay, how, and how much rests with the listener.
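
The “accumulate until it exceeds some threshold” approach might look like this. The threshold and the payment-gateway callback are stand-ins, not any real PayPal API:

```python
# Sketch: batch tips per artist and only pay out once the amount is
# worth the transaction overhead of a (say) PayPal transfer.

THRESHOLD = 5.00  # assumed minimum worth paying out

def flush_payouts(pending, send_payment):
    """Pay out any artist whose accumulated tips meet the threshold."""
    for uri, amount in list(pending.items()):
        if amount >= THRESHOLD:
            send_payment(uri, amount)  # e.g. a PayPal transfer
            pending[uri] = 0.0

pending = {"paypal://a@example.com": 6.25, "paypal://b@example.com": 0.40}
sent = []
flush_payouts(pending, lambda uri, amt: sent.append((uri, amt)))
```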

VI. Conclusion

In summary, the above proposal outlines a global framework where fans can voluntarily support artists through a competitive ecosystem of compatible service providers. The design separates functionality along clear layers of accountability and enables competition between multiple parties within the layers. The goal is to create a flexible, powerful system that enables a degree of innovation yet unseen in the music industry (at least, in the legal music industry). Much like the web and internet itself have transitioned from small, non-profit research projects into engines of global commerce, music — both its creation and consumption — has the capability to be a similarly innovative and powerful force. It just needs a framework that encourages it.


Here are the questions I’ve heard asked on this list before, and some quick answers to each:

  1. What if nobody decides to pay?
    The base assumption of the entire music industry is that music is valuable, and that fans actually do exist. If fans — people who value art and wish to support their artists — do not in fact exist, then this system won’t create them.

  2. What if no music players decide to support payment options?
    The system works best if the payment protocols are implemented in the players themselves. In the meantime, until these are widespread, music registrars can offer web-based gateways that help fans support artists using today’s technology.

  3. What’s to prevent me from uploading the Beatles as my own?
    The standard solution to this problem is to have a “sunrise period” during which prominent trademark and copyright owners are given early access to submit their own songs to the database. The expectation is that each of the labels would run its own “private” registrar to manage its songs, and thus they would simply upload a complete list of fingerprints for all their songs to the registrar-management agency. In the event anybody uploads one of the label’s songs to a different registrar, a flag would be raised when the fingerprint conflicts with the existing database, and the conflict would be resolved through manual action.
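
The conflict check itself is trivial. A sketch, with made-up fingerprints and registrar names (the agency sees only anonymous fingerprints, as described later):

```python
# Sketch of the registrar-management agency's conflict flag: the first
# registrar to claim a fingerprint owns it; any later claim by a
# different registrar is flagged for manual resolution.

registry = {}  # fingerprint -> registrar that first claimed it

def register(fingerprint, registrar):
    owner = registry.setdefault(fingerprint, registrar)
    if owner != registrar:
        return ("CONFLICT", owner)  # raise a flag; resolve manually
    return ("OK", registrar)

# A label uploads its song; later, someone else claims the same one:
first = register("fp-3da3f0af", "label-registrar")
second = register("fp-3da3f0af", "indie-registrar")
```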

  4. So… where’s the big pool of money? Where’s the sampling?
    That’s right, this system doesn’t need to globally sample listening demographics in order to disburse a central pool of money according to some arbitrary measure of value. Rather, the money is never pooled — it goes straight from the fan to the artists (via one of many competing payment gateways). The samples are never taken — it’s not really practical in the first place, and it’s just not needed. And no arbitrary measure of value is selected — it’s left up to every fan to decide how much to give his artists.

  5. What about piracy?
    What about it? It already happens today in vast amounts, and no plan on the books even claims to have a chance of doing anything about it. Piracy *is* online music — everything else is just an aberration. This plan seeks to capitalize on the real world as it exists today, tapping into the vast sums of money that fans currently aren’t giving to music labels.

  6. What about privacy?
    This system gives exceptional privacy protections to all involved because there is no one entity that sees all activity. As such, it doesn’t centrally aggregate sampling data, demographic profiling, historical traffic, personally identifiable information, or any of the problems that people are generally skittish about. The centermost entity of this plan is an organization that just has anonymous fingerprints of unnamed songs, and knows absolutely nothing about the songs themselves, the artists who make them, the users who listen to them, or the interactions in between.

  7. X got paid $Y before, will he still be?
    Possibly. Maybe he’ll get paid more. Or maybe less. The same can be said about every other solution on the table.

  8. But it’s not fair! How will X get paid for Y?
    This plan recognizes that every fan has a different idea of what is or is not fair, and fully empowers him to act upon that notion. Even the old system that is rapidly dying wasn’t “fair”; it was merely “what was”. This plan does not attempt to blindly copy what was, nor invent some new notion of “fair” and mandate that all fans obey it under threat of force. So in this sense, it is arguably the most fair of all.

  9. Hasn’t this been tried before?
    Everything’s been tried before, and everything has failed – all plans have failed – due to lack of support and outright opposition by the “old guard” music industry. Virtually every innovative plan, both voluntary and compulsory, has been crippled through lawsuit, squeezed through impossible pricing, or bypassed through refusal to participate. There’s very little in this plan that’s new, and without action by the existing industry, this plan to create a feasible commercial alternative to raw, uncompensated piracy will fail just like all the others.

    But this proposal isn’t intended as a panacea. It’s intended as a review of what’s possible should the music industry decide to begin acting reasonably and in the interests of artists, fans, and society at large. There are signs that the industry is starting to have reason forced upon it by investors, artists, and even a gradual awakening of common sense after a decade of complete destruction of shareholder value. One day, they will either become irrelevant or will sign up to one of the many, many plans proposed and nurtured over the years. Maybe they’ll choose this one. Maybe not. The point of you reading this is to be aware that the vision presented herein is in fact possible, and to either encourage the industry to adopt this proposal, or to encourage Congress to strip the industry of its abused and overzealous tools of copyright enforcement such that we can continue on without them. How many more decades are we willing to wait?

  10. So… that’s all well and good, but seriously… Where’s the sampling?
    Seriously, it’s not needed. Take it in reverse.

Q: Why sample?

A: Well, we know how to at least try to sample music fingerprints transferred over the backbone, and we think that samples are somehow related to how often songs are listened to, so by sampling we can get a sense of which songs are most often listened to.

Q: Why do we care how often songs are listened to?

A: Well, we’re assuming that the number of times a song is listened to is representative of how valuable it is to fans.

Q: Why do we care if a song is valuable to fans?

A: Because artists must be paid in proportion to value, obviously!

Q: Paid by whom?

A: Well… by fans, I guess… obviously.

Q: Why don’t fans pay artists directly?

A: Well they *were*, through CD sales, until piracy ruined everything.

Q: I thought CD sales largely didn’t go to artists.

A: Well… if you want to get *technical*, no, but they sorta “trickled down to artists”… It’s complicated.

Q: Ok, again, why don’t they pay artists directly?

A: Because that’s impossible! What, are they supposed to track down every artist in their playlist and give them a nickel each time they play the song?

Q: Sure, why not?

A: Because… because you just can’t. It’s complicated. Fans can’t be trusted to support their artists directly. They need help.

Q: Help from whom?

A: Well, help from me, of course. And my friends. Only we can get artists compensated.

Q: But I thought your CD sales largely didn’t go to artists?

A: Yes they do! They trickle!

Q: So let me get this straight: the goal is to help artists get paid by fans in proportion to how much fans like them. But fans can’t be trusted to do it directly, and instead artists need the help of organizations that historically take the lion’s share of the profit and leave a trickle for the artists themselves? And the best way to do this is to force everyone to pay you a bunch of money that you distribute based on relative estimated value to fans calculated by sampling backbone traffic for a small set of music fingerprints, extrapolating global traffic, inferring total music listens from that, and then converting that sampled/extrapolated/inferred number into “value to fans” with an arbitrary formula selected by… by whom again?

A: By me.

Q: Got it.

A: That’s right! Now you’re getting it.

Q: And why not just let fans give artists money directly?

A: You just… you just can’t! And… it’s different, and therefore scary. Artists talking to fans? Fans talking to artists? What an absurd thought. Fans can’t be trusted! Artists don’t want to talk to fans! There needs to be a middleman. Lots and lots of middlemen. And formulas! And sampling! And most importantly — a huge, enormous pool of money. That I control. Trust the trickle. It worked for your grandpa. Why can’t it work for you?

How's this for a blast from the past. I posted this to the iGlance
Yahoo group. You can read the original here, which has a couple
follow-up replies:


But here's the text itself for your reading pleasure:
Hi, thanks for writing. NAT penetration is a very tricky subject, so
let me first give an overview of what the obstacles are, and then I'll
explain my approach for circumventing them.

(Note, the 'STUN' protocol I'm using is home-brewed — it's not
truly compliant with RFC3489, for reasons I can get into if you care to
hear. However, it accomplishes the same thing.)

First, assume the following network:

+--------+     +-----+     +--------+
| Client | --> | NAT | --> | Server |
+--------+     +-----+     +--------+

The client is connected to the NAT, and the NAT is connected (via the
internet) to the server. The client is generally on some LAN, and thus
has a "private" IP address. However, the NAT is generally on the
internet, and thus has a "public" internet IP address. Thus while the
client cannot send packets directly to the server (because the client
isn't on the internet), the client can send it "through" the NAT.

Now, UDP packets indicate from which address they originated. But which
address does the packet appear to be from when the server receives it:
the client, or the NAT? The answer is the NAT — NAT stands for
"Network Address Translator" because it translates "private" addresses
(such as on a LAN) to "public" addresses (such as on the internet).

So the client sends a packet from the LAN address (call it privateIP)
but the server thinks it's coming from an internet address (call it
publicIP) due to the NAT's translation. So long as the client is
simply sending to the server, there's no problem — if the
server is only receiving, it doesn't care what address the packet comes
from. But the moment the server wants to reply, then things get tricky.

In the easy case when a server is replying to a client request, the
server just sends back to the address the request packet appeared to
come from (ie, the publicIP). And when the NAT receives it, it forwards
it back to the client. In this way, when a client establishes a
connection with a server, the client and server can talk back and forth
without trouble.

However, the reverse is not so easy. Now, when the client initiates a
connection with the server, it 'punches a hole' through the NAT. This
hole (also called a 'mapping') is what the server uses to talk back with
the client. However, if the client doesn't punch the hole to the server
first, the server can't contact the client. Indeed, if the server sends
a packet to 'publicIP' before the client punches the hole through the
NAT, the NAT will just silently disregard the message and it'll never
arrive.

Thus a NAT is a bit like a one-way mirror: a client behind a NAT can
contact servers without restriction, but servers can't do the same.
Many people like this behavior for security reasons. But obviously, in
a P2P network this is less desirable because if you're behind a NAT, a
remote client can't contact you until you contact it. But if it's also
behind a NAT, you can't contact it until it contacts you. A seemingly
intractable problem.

To solve this problem, iGlance uses a directory server that acts as an
intermediary to help clients behind NATs and firewalls connect directly.
The process works as follows:

1) Client A connects to the global server and registers its IP
2) Client B connects to the global server and asks for the IP for A
3) The server informs A that B is trying to contact it
4) Client A begins trying to contact B
5) Client B begins trying to contact A
6) Eventually a direct connection is established
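
The rendezvous in steps 4-6 can be sketched with plain UDP sockets. Note this runs on localhost with no NAT in the middle, so it only demonstrates the packet flow; the peer addresses here stand in for the public IP:ports the directory server would hand out:

```python
# Rough illustration of steps 4-6: both peers transmit first, then
# receive. Against real NATs, that outbound packet is what punches the
# hole letting the peer's reply in; here it just shows the exchange.
import socket

a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))
a.settimeout(2.0)
b.settimeout(2.0)

# In the real system each side learns the other's public IP:port from
# the directory server; locally we can just ask the sockets directly.
addr_a, addr_b = a.getsockname(), b.getsockname()

# Steps 4 and 5: both sides send before either gives up on receiving.
a.sendto(b"hello from A", addr_b)
b.sendto(b"hello from B", addr_a)

# Step 6: both receives now succeed -- a direct connection.
msg_at_b, _ = b.recvfrom(1024)
msg_at_a, _ = a.recvfrom(1024)
a.close()
b.close()
```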

As mentioned before, whether A tries to contact B or B tries to contact
A, both will fail independently. But when they both try to contact each
other simultaneously, they both "punch holes" through their NATs and
firewalls, and thus both let the other's communications through. This
technique of simultaneous hole punching is the essence of NAT-to-NAT
communication.

However, recall that each client typically only knows its "private" IP
address — ie, the IP address on its private LAN. But just as the
server sees only a client's "public" IP address, so do peers only see
other peers' public IPs. Thus before client A can attempt to contact
client B, A needs to learn B's public IP.

This process of a client determining whether or not it is behind a NAT
(and if so, finding its public IP address) is called the 'STUN' process
— named after the IETF standard RFC3489. (iGlance doesn't use this
protocol, but is heavily influenced by it.) The precise technique
iGlance uses is as follows:

1) STUN server is assigned 3 IP addresses — STUN0-2

2) Client sends STUN request to STUN0

3) Client punches hole to STUN1

4) The STUN server attempts to contact the client *from* STUN0-2

Thus the STUN server sends *three* responses from *three* different
IP:port combinations, to the *same* IP:port from which the client
request originated. Depending on the NAT and firewall in place, the
client might successfully receive up to 3 responses, one each from a
different IP:port on the STUN server. Based on which requests succeed,
we can guess which type of NAT is between the client and the STUN
server. This is used to set the 'Connection_Class' as follows:

FIREWALL: (0 responses)
Something is blocking either all outbound or inbound UDP traffic.

SYMMETRIC: (1 response from STUN0)
The client can receive UDP only from the exact IP it sends to.

RESTRICTED: (2 responses, from STUN0 and STUN1)
The client can receive UDP only from remote IP:ports for which holes
have explicitly been punched.

UNRESTRICTED: (3 responses)
Once a hole is punched through the NAT, any remote IP:port can use it to
contact the client.

PUBLIC: (3 responses)
The client is not behind a NAT and thus can receive from any IP:port.

Furthermore, the server returns in the STUN response the apparent
IP:port from which the client's request appeared to originate. Recall,
the client sends from its 'private' address, while the server receives
from the client's 'public' address. If these are different, we know a
NAT must be in place. But if they are the same, then we can assume
there is no NAT in place and thus the client is connected to the
internet directly. (This is how iGlance distinguishes between the
UNRESTRICTED and PUBLIC cases, both of which receive all 3 responses.)

(All this logic is contained in the file GDispatchService.cpp. The STUN
request is sent in the function GDispatchService::_requestStun( ), and
the responses are processed by GDispatchService::_onInput( ) in the
GDSS_STUN state.)

So clients with PUBLIC, UNRESTRICTED, or RESTRICTED NATs know they can
receive UDP directly from another peer. And clients behind SYMMETRIC
NATs or a UDP-blocking FIREWALL know they can't (they must establish a
'TURN' connection with the server, which simply listens for UDP traffic
and sends back over HTTP). Armed with this information, clients can
ensure they are able to be contacted by remote peers, whether behind a
NAT or FIREWALL, or directly on the internet.
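
Putting the classification together, the logic might look like the following. This is my reconstruction of the scheme described above, not the actual GDispatchService code:

```python
# Sketch of the Connection_Class logic. `responses` is the set of STUN
# server indices (0, 1, 2) the client heard back from; the private vs
# public address comparison separates the two 3-response cases.

def classify(responses, private_addr, public_addr):
    if not responses:
        return "FIREWALL"      # all inbound or outbound UDP blocked
    if responses == {0}:
        return "SYMMETRIC"     # only the exact IP we sent to can reply
    if responses == {0, 1}:
        return "RESTRICTED"    # only explicitly punched IP:ports get in
    # 3 responses: either an unrestricted NAT, or no NAT at all --
    # if the private and public addresses match, there's no NAT.
    return "PUBLIC" if private_addr == public_addr else "UNRESTRICTED"

def needs_turn(connection_class):
    """SYMMETRIC NATs and UDP-blocking firewalls must relay via TURN."""
    return connection_class in ("FIREWALL", "SYMMETRIC")
```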

Does this answer your question?


Anybody know anything about this? Care to take any guesses?


TPB has never really been a coding organization, so my bet is it's not
on some amazing new P2P service, but rather just a retooled version of
TPB website that is optimized for music content. In other words, the
basic foundation will still be a standard Torrent client.

I'm still amazed nobody has built a really good pirate music outfit — a
*true* Pandora whose box when opened can't ever be closed. Music as a
product only has two real features:

1) Play this song (or list of songs) right now
– Search MusicBrainz
– Find the most popular album containing the song
– Search TPB for the highest seeded version of that album
– Download it with libtorrent
– Fish out the song you actually wanted
– Play
2) Play songs around this theme until I tell you to stop
– Given an artist name
– Look for similar artists on MusicBrainz
– Assemble a big playlist
– Download albums one at a time like (1)
– Play a random mix of whatever subset is available
– Keep expanding that subset

It's the simplest possible product to conceive. All the hard problems
have already been solved: the content is readily available, the metadata
is already there. All the pieces are in place and are just waiting for
someone to assemble them into a user-friendly package. The only "work"
involved is:

1) Build a UI with three input elements:
– A search box
– A "Play exact" button
– A "Play like" button

2) When "Play exact" is pressed it goes out, downloads, and plays that
exact song, artist, album, etc. Furthermore, if it's already downloaded
it just plays from its cache.

3) When "Play like" is pressed, it instead goes and finds a range of
songs/artists/albums like it, and plays those instead.

The only challenge is dealing with mapping the fuzzy input from the user
into MusicBrainz, and then mapping its output into ThePirateBay, and
then figuring out which song downloaded is the one you want. But again,
that's a solved problem. I don't personally know the best solution, but
if you convert everything into soundex sequences and just match based on
how many common homophones it has, I bet you'd get pretty close.
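
For the curious, the soundex idea is only a few lines. Here's a sketch using the standard American Soundex (a real service would surely tune the matching, so treat this as a starting point, not the answer):

```python
# Standard American Soundex: encode the query and the candidate titles,
# then compare codes to get a crude "sounds like" match.

def soundex(word):
    codes = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
             "l": "4", "mn": "5", "r": "6"}
    def code(ch):
        for letters, digit in codes.items():
            if ch in letters:
                return digit
        return ""  # vowels, h, w, y carry no code
    word = "".join(ch for ch in word.lower() if ch.isalpha())
    if not word:
        return "0000"
    encoded = word[0].upper()
    prev = code(word[0])
    for ch in word[1:]:
        digit = code(ch)
        if digit and digit != prev:
            encoded += digit
        if ch not in "hw":  # h/w don't break a run of equal codes
            prev = digit
    return (encoded + "000")[:4]

def fuzzy_match(query, candidates):
    """Return candidates whose Soundex code matches the query's."""
    q = soundex(query)
    return [c for c in candidates if soundex(c) == q]
```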

Anybody on this list could build it. Seriously, it is a one-person job,
and there are probably dozens of people on the list with the time,
energy, and inclination to do it. Why study some esoteric P2P mesh
problem that odds are won't ever matter, when in the same (or less) time
you could build a world-shaking music service, single-handedly? You
could be *the guy* to take down the music industry.

Especially if you're in a non-US jurisdiction, this seems a no brainer.

Anyway, maybe ThePirateBay will do this now, but I doubt it. I expect
we'll need to wait for some nameless individual on the other side of the
world to step up. It really, truly, only takes one person to change the
world, forever.