A Dangerous Idea: Continuous Metadata Sousveillance

Posted in Ranting on August 20, 2013 by Emlyn

I’ve been thinking about Zimmermann’s Law:
“The human population may not be doubling every eighteen months, but the ability of computers to track us doubles every eighteen months.” “The natural flow of technology tends to move in the direction of making surveillance easier.”

He seems to think legislators need to “do something”. But I think you need to work with Moore’s Law, or be crushed by it. Legislators aren’t going to help; they’re actually who you’re trying to defend yourself from!

My instincts are that eventually, computers will be so powerful, networks so capacious, that basic data will be completely impossible to keep under wraps. In that scenario, anyone trying to hide anything will be detectable with a bit of signal processing. Secret organisations will be plainly visible via the negative space they leave in the general data exhaust.

Unfortunately we’re not there. My guess is that we’re about 20 years away from something like that. In the meantime, there’s this massive disparity between what institutions have access to in terms of data and what we have access to.

That disparity is potentially quite dangerous, particularly if it’s completely asymmetrical, as it threatens to be. If it were even a little more symmetrical, I believe that large, secretive institutions would have far more to worry about than regular people. After all, if a bit of your personal information leaks onto the ‘net, it’s just about always going to be harmless. If a bit of the NSA’s private information leaks, all hell breaks loose and they’re suddenly in existential peril.

What we’ve discovered recently is that the content of communications isn’t all that important. It’s the metadata that lets you see the general shape of things, the big picture. That’s why the secure email services are shutting down.

You’d think that we’d be able to use metadata in the reverse direction; see into the three letter agencies by analyzing the big data, seeing them in their exhaust, and in the negative space. But we don’t have access to datasets from cell phones, from cloud providers, from interaction with government agencies. We don’t have enough ability to touch the big data.

But we could make our own big data. That’s where Sousveillance comes in.

Sousveillance is “watching from below”, the counterpoint to Surveillance. Up till now, everything I’ve seen people say about Sousveillance has been about video. But video sousveillance has a lot of problems: video is large (hard to upload in volume), unwieldy (hard to extract information from), and recording video is still difficult to do continuously and inconspicuously.

But we’ve just learned that “metadata” is actually what’s useful. It’s not the content of an event, but the time, date, participants, location, and devices involved. That’s the stuff you actually need to extract an interesting big-data signal.

All of us now carry devices capable of metadata sousveillance, right now. They’re our mobile phones, tablets, laptops, and soon to be watches, glass, and other wearables.

On these devices, you can monitor all of your own communications. You also have quite an array of sensors, and one of the most interesting and most often forgotten is your network hardware. Your network hardware alone is aware of the devices around it: MAC addresses of wifi and bluetooth devices, services advertising themselves on local networks, NFC devices and tags you interact with, cell towers and related metadata, and so on.

Take that kind of data, stamp it with location, identity and timestamps, and push it online. All the time.

With the right app or apps, your devices could voluntarily upload streams of metadata to public repositories on the net. Users should be aware and voluntarily participating, but needn’t actually be technical. Just install and go.
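To make that concrete, here’s a minimal sketch of what one of those metadata records might look like, and how an app could push it to a public repository. This is purely illustrative: the endpoint, field names and device id are made up, and a real app would use platform-specific APIs to actually scan for nearby wifi/bluetooth devices.

import json
import time
import urllib2

def build_record(location, observed_devices):
    # One sousveillance datapoint: who reported it, where, when,
    # plus whatever the radios could see at that moment.
    return {
        "timestamp": time.time(),
        "reporter": "my-public-device-id",   # voluntary, identified participation
        "location": location,                # eg: {"lat": -34.93, "lon": 138.60}
        "observed": observed_devices,        # eg: [{"type": "wifi", "mac": "aa:bb:cc:dd:ee:ff"}]
    }

def push_record(record, repository_url="http://example.org/sousveillance"):
    # Upload one record to a (hypothetical) public repository.
    request = urllib2.Request(repository_url, json.dumps(record),
                              {"Content-Type": "application/json"})
    return urllib2.urlopen(request)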

With enough people installing such software, the repository we’d get would grow stupendously. And you’d start to see things. Maps of devices inside buildings being picked up by people walking down the street. Clusters of otherwise unrelated mobile devices turning up together in the same places at the same time. Protestors might start mapping devices used by the police, turning up from one place to another. And what else? I’m not sure, but I’m pretty sure there’d be amazing secrets to be uncovered.
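As a toy example of the kind of signal processing that becomes possible, here’s a sketch that looks for pairs of devices repeatedly sighted together at distinct places and times, using records shaped like the ones in the sketch above (again, the names and thresholds are just illustrative):

from collections import defaultdict
from itertools import combinations

def cooccurring_devices(records, min_sightings=3):
    # Find pairs of device identifiers that keep turning up together
    # at distinct place/time buckets in the uploaded metadata stream.
    pair_sightings = defaultdict(set)
    for record in records:
        place_time = (round(record["location"]["lat"], 3),
                      round(record["location"]["lon"], 3),
                      int(record["timestamp"] // 3600))   # bucket sightings by hour
        macs = sorted(d["mac"] for d in record["observed"] if "mac" in d)
        for a, b in combinations(macs, 2):
            pair_sightings[(a, b)].add(place_time)
    # Keep only pairs seen together in several different buckets.
    return dict((pair, times) for pair, times in pair_sightings.items()
                if len(times) >= min_sightings)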

Early on it’d likely be fairly dangerous to be involved, because you’d be pretty exposed. You’d be posting your own information freely online, after all. But if the idea spread, it’d start to be safer and more powerful I think.

It’d start forcing secretive institutions to try to obfuscate themselves, or else stop using open protocols. Both paths would really damage those institutions, making them less able to operate in the modern world.

There’d be some pretty amazing technical challenges. Where does this data go? How do you handle, store this massive stream of stuff?

But I think it’s probably doable. And it’s probably necessary, if we’re to push back against Zimmermann’s Law.

How Copyright Makes Books and Music Disappear

Posted in Ranting on July 7, 2013 by Emlyn

An interesting paper by Paul J. Heald. Have a look at the graph; a copyright regime is like burning all the libraries.

2317 New Books from Amazon by Decade

That’s a graph of new books currently on sale on Amazon, grouped by the decade in which they were first published. Why do new books drop off sharply starting with those first published in the 1920s? Wikipedia says: “All copyrightable works published in the United States before 1923 are in the public domain.”

Abstract: “A random sample of new books for sale on Amazon.com shows three times more books initially published in the 1850’s are for sale than new books from the 1950’s. Why? This paper presents new data on how copyright seems to make works disappear. First, a random sample of 2300 new books for sale on Amazon.com is analyzed along with a random sample of 2000 songs available on new DVD’s. Copyright status correlates highly with absence from the Amazon shelf. Together with publishing business models, copyright law seems to stifle distribution and access. On page 15, a newly updated version of a now well-known chart tells this story most vividly. Second, the availability on YouTube of songs that reached number one on the U.S., French, and Brazilian pop charts from 1930-60 is analyzed in terms of the identity of the uploader, type of upload, number of views, date of upload, and monetization status. An analysis of the data demonstrates that the DMCA safe harbor system as applied to YouTube helps maintain some level of access to old songs by allowing those possessing copies (primarily infringers) to communicate relatively costlessly with copyright owners to satisfy the market of potential listeners.”

I’ve copied this here so it’s more social-network friendly.

Why are there Wizards?

Posted in Ranting on June 15, 2013 by Emlyn

When I was a young lad, in my first programming job (not even out of uni), an older woman who worked in accounts told me that programming had no future. Apparently her TAFE lecturers had been insistent on this point; programming was being brought into the realm where anyone could do it. Wizards no longer needed.

That was the early nineties, and it made an impression on me, because I felt it was deeply, profoundly wrong. I felt that idea was based in a mistaken view of why technology changes over time with respect to human society (ie: the dynamics of the technium), and what the true role of technologists (especially software people) is.

The Expanding Space of the Possible

Ok, stay with me here…

The Expanding Space Of The Possible

Many thanks to Sir Jony Ive for making this diagram

When you stand back and look at human endeavour, there are things we can do and things we can’t. The effects of human imagination, competition, and general discontent lead us to be very aware of the boundary between the possible and the impossible.

In the diagram above, points in the space represent logically consistent things we can imagine doing. Those points don’t move, but the boundaries in the diagram (what is possible and what is not) do.

The space of the possible expands over time; certainly in recent history this has been very hard to miss. I suspect there is a fairly tight relationship between population density (and so loosely with population) and the size of the possible, and that it can shrink when population density drops. Think of the space of the possible as what we can do using technology; this is totally dependent on the “level” of our technology. Jared Diamond writes about how technological level varies with population in his books. But in any case, at the modern global scale, expansion is a given.

The Automatium and The Laborium

I’ve split the space of the possible into two regions.

The inner, magenta region, is the Automatium. This represents all the things that we understand so well, have mastered in such depth, that they are fully automated. People involved in a relevant domain of endeavour can access the Automatium trivially and with little thought. In the consumer domain, we can go to the shop and buy an incredible variety of food, get whitegoods that keep things cold, wash things, cook things, communicate with people all over the globe, increasingly access knowledge about anything, and none of it requires much skill or understanding. Social networks such as Facebook, Twitter, and G+, recently brought the job of communicating with sophisticated and intricately constructed networks of dispersed others directly into the Automatium.

The outer, cyan region, is the Laborium. This represents all the things that we can do but that are not automated. They require labour, effort. Often they require skilled practitioners of one profession or another, and teams of people, and capital. Pretty much all paid work is in the Laborium (because it’s the place where money moves). Anything that you would build a service business around is in the Laborium. Using a social network might be in the Automatium, but building a social networking platform is in the Laborium (and on the outer edges, at that).

The outer edge of the Automatium is like a hard floor (the Automation Floor) below which we won’t go, while the outer edge of the Laborium is like a flexible membrane.

Everyone in the Laborium is either standing on the hard floor provided by the outer edge of the Automatium, or standing on someone else’s shoulders. So the size of the laborium is defined by some combination of the sheer amount of people involved, and the complexity of organisation possible. The latter is the maintainable height of people standing on each other’s shoulders.

So why does the whole thing move? The fundamental mechanism is that we keep building more floor beneath us. Things enter the space of the possible at the outer edge, where massive capital, huge collections of people, and large chunks of time are required. Our competition with each other, and maybe just our drive to improve, makes some of us try to make these things simpler, cheaper, quicker. So things are moved from the outer edge of the Laborium toward the inner edge (shifting not the point in possibility space, but the Laborium with respect to it). The Laborium is like a churning froth, but it also behaves like a ratchet; once something moves lower, it won’t move higher again.

Inevitably, possibilities reach the outer edge of the Automatium, and are laid down as another hard layer of automation floor. People step up onto that. The shift ripples upward, and the outer membrane of the Laborium stretches to encompass new, previously impossible things. The space of the possible grows.

Technological Unemployment

The traditional story of technological unemployment goes as follows:

“Technological unemployment is unemployment primarily caused by technological change. Given that technological change generally increases productivity, it is accepted that technological progress, although it might disrupt the careers of individuals and the health of particular firms, produces opportunities for the creation of new, unrelated jobs.”

In terms of this post, this traditional view is that people and firms work at a fixed point in space. As the automation floor moves past them (and people really don’t see it coming), they fall out of being able to do paid work. But the people involved eventually retrain/retarget/move on, often to something else very much closer to the outer membrane of the laborium, and they’re back in the game. If anything, the traditional situation has the laborium understaffed a lot of the time; we could reach further but we just don’t have the manpower.

Workpocalypse

However, there’s an emerging view that perhaps something has changed recently. Because of modern automation, jobs are being destroyed faster than they are being created. That is, the Automatium is expanding faster than the Laborium.

Particularly, a divergence between productivity and job growth has emerged.

Erik Brynjolfsson of MIT thinks jobs are disappearing for good. This excellent piece in the MIT Technology Review reports:

“Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States. For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.”

How does this fit into our picture? What seems to be happening is that the Laborium is shrinking.

The Laborium is all about people. People have all kinds of skills and talents and differences, but we all spread out on fairly constrained continua, especially compared to automation.

It used to be that specific things were automated away, but now entire classes of things are being hit. This means the automation floor is expanding faster than the Laborium’s outer membrane, shrinking the Laborium overall.

The space remaining is biased toward certain kinds of work. What kind of work is it most biased toward? Work that further accelerates expansion of the Automatium and further shrinks the Laborium.

The Rise of Wizards

People who create technology are people who automate things. You can automate with all kinds of technology, but the most effective technological space for doing this is the space of software.

Software is a unique technology. It’s the most flexible general technology we’ve ever found for taking the imagined and making it real in the quickest, most malleable, and potentially most complex and sophisticated fashion.

The wizards, ie: the people who create software, are the most efficient group at moving the boundaries of the possible. Wizards move the automation floor and move the Laborium membrane, and at global scale the collection of such effort has these boundaries moving ever faster.

Other types of work tend to involve staying relatively static within problem space. But wizards, by nature of what we do, are continually on the move; changing technologies, paradigms, environments, everything. Or if we don’t, then we don’t get to stay being wizards.

True wizards tend to abhor manual repetition. The idea of someone working away in a fixed section of the laborium, with no plan for eventually automating away that toil, inspires revulsion.

Business loves wizards, because wizards hold out the promise of a true edge in a competitive environment.

In a static environment, everyone has access to the same technologies, talents, ideas. The kinds of things that give one organisation an advantage over another are size, being entrenched, having connections. This all leads to a static environment without much room for change, or for new players.

But the point of wizards is to raise the business closer to the outer membrane of the Laborium; that keeps the business more competitive (the air is more rarefied there!), keeps it away from the doom of the automation floor, and allows a smaller business to outwit a larger one that is not so far out. Often this requires raising the automation floor in the business’s niche, related areas, or sometimes across some orthogonal line when the technology is abstract. Hopefully it involves pushing the membrane further out, and temporarily occupying space that no one else has reached yet.

Why can’t everyone be a Wizard?

When someone tells you that now anyone can do what a wizard does (eg: now I haz visual basic), you know the technologies involved are falling through the automation floor. That’s not where wizards hang out.

Wizards live in tall towers, built high above that floor. As they sink toward the floor, they build new, taller ones.

Up high, near the outer Laborium membrane, is a hostile place. Nothing is easy. Things are possible but very difficult. Ideas haven’t fully coalesced, standards haven’t developed, best practices haven’t developed or are wrong. Compare constructing a web app using the LAMP stack (down in the lower floors of the wizard tower) to building a massive distributed application on something like Heroku or Google AppEngine. Compare building a standard AJAX-based Web 2.0 site to a sophisticated mobile app (or set of apps to reach cross-platform), or a mobile-friendly web app with offline functionality. The newer things are more powerful, but much harder to do. There’s less received wisdom, more primitive tooling, and previously developed instincts tend to be wrong. But the opportunity is much greater.

It seems to take a unique mindset to really be a Wizard. You have to be comfortable with constant change. Increasingly you need to feel good about not thoroughly understanding your technologies, never being comfortable with the technology stack you’re using this week, never really attaining mastery at particular concrete skills. Clearly not everyone can do this; it’s why people try to develop simplified, non-wizard-friendly versions of programming technologies.

All you can know for sure is that if the tech you are using is starting to feel solid, understood, well developed, then you’re close to the automation floor and need to get moving again.

The Ironic Nature of Wizards

The supreme irony for wizards is that we’ll be the last ones in the Laborium, after everyone has given up on that kind of toil.

Step by step, all other work will be automated away. Every other area will require fewer and fewer people, as the automation floor expands ever more quickly, and whole industries will continue to be sucked down below it, replaced by organisations working at increasing levels of abstraction, relying on smarter and smarter tech and ever fewer people.

Meanwhile Wizards keep moving the boundaries, always running toward the outer edge.

The laborium will get thinner and thinner, as technology catches up to and surpasses human ability, unevenly but inexorably. Fewer and fewer people will be in it, and it will come to be dominated by Wizards.

As a great example, here’s an article by a Silicon Valley web developer marvelling at being paid top dollar for seemingly meaningless work (it’s just abstract), while his non-wizard compatriots are increasingly left out in the cold: http://www.aeonmagazine.com/living-together/james-somers-web-developer-money/

So by this logic, we Wizards will be the last ones working. The last of us will turn off the lights before we leave.

Meanwhile, the most recent news about the lady from accounts was that she’d been retrenched and was having trouble finding more work.

Man Of Constant Sorrow

Posted in Original Music on February 17, 2013 by Emlyn

There’s something unusual about this arrangement of ours. Can you pick it?

Jodie and I are recording a bunch of songs “live” in front of an audience next week. If you’re in Adelaide, you’re welcome to come along: next up | emlynandjodieoregan

This is a rehearsal from earlier this afternoon. Our house was infested with teenagers, so we’ve eschewed the kitchen table in favour of Jodie’s singing studio, “The Singing Garden”.

gaedocstore: JSON Document Database Layer for ndb

Posted in Tech the Tech on January 6, 2013 by Emlyn

In my professional life I’m working on a server side appengine based system whose next iteration needs to be really good at dealing with schema-less data; JSON objects, in practical terms. To that end I’ve thrown together a simple document database layer to sit on top of appengine’s ndb, in python.

Here’s the github repo: https://github.com/emlynoregan/gaedocstore

And here’s the doco as it currently exists in the repo; it should explain what I’m up to.

This library will no doubt change as it begins to be used in earnest.

gaedocstore

gaedocstore is MIT licensed http://opensource.org/licenses/MIT

gaedocstore is a lightweight document database implementation that sits on top of ndb in google appengine.

Introduction

If you are using appengine for your platform, but you need to store arbitrary (data defined) entities, rather than pre-defined schema based entities, then gaedocstore can help.

gaedocstore takes arbitrary JSON object structures, and stores them to a single ndb datastore object called GDSDocument.

In ndb, JSON can simply be stored in a JSON property. Unfortunately that is a blob, and so unindexed. This library stores the bulk of the document in first class expando properties, which are indexed, and only resorts to JSON blobs where it can’t be helped (and where you are unlikely to want to search anyway).

gaedocstore also provides a method for denormalised linking of objects; that is, inserting one document into another based on a reference key, and keeping the inserted, denormalised copy up to date as the source document changes. Amongst other uses, this allows you to provide performant REST apis in which objects are decorated with related information, without the penalty of secondary lookups.

Simple Put

When JSON is stored to the document store, it is converted to a GDSDocument object (an Expando model subclass) as follows:

  • Say we are storing an object called Input.

  • Input must be a dictionary.

  • Input must include a key at minimum. If no key is provided, the put is rejected.

    • If the key already exists for a GDSDocument, then that object is updated using the new JSON.
    • With an update, you can indicate “Replace” or “Update” (default is Replace). Replace entirely replaces the existing entity. “Update” merges the entity with the existing stored entity, preferentially including information from the new JSON.
    • If the key doesn’t already exist, then a new GDSDocument is created for that key.
  • The top level dict is mapped to the GDSDocument (which is an expando).

  • The GDSDocument property structure is built recursively to match the JSON object structure.

    • Simple values become simple property values
    • Arrays of simple values become a repeated GenericProperty. ie: you can search on the contents.
    • Arrays which include dicts or arrays become JSON in a GDSJson object, which just holds “json”, a JsonProperty (nothing inside is indexed or searchable)
    • Dictionaries become another GDSDocument
    • So nested dictionary fields are fully indexed and searchable, including where their values are lists of simple types, but anything inside a complex array is not.

eg:

ldictPerson = {
    "key": "897654",
    "type": "Person",
    "name": "Fred",
    "address": 
    {
        "addr1": "1 thing st",
        "city": "stuffville",
        "zipcode": 54321,
        "tags": ['some', 'tags']
    }
}

lperson = GDSDocument.ConstructFromDict(ldictPerson)
lperson.put()    

This will create a new person. If a GDSDocument with key “897654” already existed then this will overwrite it. If you’d like to instead merge over the top of an existing GDSDocument, you can use aReplace = False, eg:

    lperson = GDSDocument.ConstructFromDict(ldictPerson, aReplace = False)

Simple Get

All GDSDocument objects have a top level key. Normal ndb.get is used to get objects by their key.
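For example, a minimal sketch (assuming the JSON “key” value is used as the datastore entity id; check the repo for the exact key scheme):

from google.appengine.ext import ndb

lkey = ndb.Key('GDSDocument', '897654')
lperson = lkey.get()   # returns the GDSDocument, or None if no such entity exists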

Querying

Normal ndb querying can be used on the GDSDocument entities. It is recommended that different types of data (eg Person, Address) are denoted using a top level attribute “type”. This is only a recommended convention however, and is in no way required.

You can query on properties in the GDSDocument, ie: properties from the original JSON.

Querying based on properties in nested dictionaries is fully supported.

eg: Say I store the following JSON:

{
    "key": "897654",
    "type": "Person",
    "name": "Fred",
    "address": 
    {
        "key": "1234567",
        "type": "Address",
        "addr1": "1 thing st",
        "city": "stuffville",
        "zipcode": 54321
    }
}

A query that would return potentially multiple objects including this one is:

GDSDocument.gql("WHERE address.zipcode = 54321").fetch()

or

s = GenericProperty()
s._name = 'address.zipcode'
GDSDocument.query(s == 54321).fetch()

Note that if you are querying on properties below the top level, you cannot do the more standard

GDSDocument.query(GenericProperty('address.zipcode') == 54321).fetch()  # fails

due to a limitation of ndb.

If you need to get the json back from a GDSDocument, just do this:

json = lgdsDocument.to_dict()

Denormalized Object Linking

You can directly support denormalized object linking.

Say you have two entities, an Address:

{
    "key": "1234567",
    "type": "Address",
    "addr1": "1 thing st",
    "city": "stuffville",
    "zipcode": 54321
}

and a Person:

{
    "key": "897654",
    "type": "Person",
    "name": "Fred"
    "address": // put the address with key "1234567" here
}

You’d like to store the Person so the correct linked address is there; not just the key, but the values (type, addr1, city, zipcode).

If you store the Person as:

{
    "key": "897654",
    "type": "Person",
    "name": "Fred",
    "address": {"key": "1234567"}
}

then this will automatically be expanded to

{
    "key": "897654",
    "type": "Person",
    "name": "Fred",
    "address": 
    {
        "key": "1234567",
        "type": "Address",
        "addr1": "1 thing st",
        "city": "stuffville",
        "zipcode": 54321
    }
}

Furthermore, gaedocstore will update these values if you change address. So if address changes to:

{
    "key": "1234567",
    "type": "Address",
    "addr1": "2 thing st",
    "city": "somewheretown",
    "zipcode": 12345
}

then the person will automatically update to

{
    "key": "897654",
    "type": "Person",
    "name": "Fred",
    "address": 
    {
        "key": "1234567",
        "addr1": "2 thing st",
        "city": "somewheretown",
        "zipcode": 12345
    }
}

Denormalized Object Linking also supports pybOTL transform templates. gaedocstore can take a list of “name”, “transform” pairs. When a key appears like

{
    ...
    "something": { key: XXX },
    ...
}

then gaedocstore loads the key referenced. If found, it looks in its list of transform names. If it finds one, it applies that transform to the loaded object, and puts the output into the stored GDSDocument. If no transform was found, then the entire object is put into the stored GDSDocument as described above.

eg:

Say we have the transform “address” as follows:

ltransform = {
    "fulladdr": "{{.addr1}}, {{.city}} {{.zipcode}}"
}

You can store this transform against the name “address” for gaedocstore to find as follows:

GDSDocument.StorebOTLTransform("address", ltransform)

Then when Person above is stored, it’ll have its address placed inline as follows:

{
    "key": "897654",
    "type": "Person",
    "name": "Fred",
    "address": 
    {
        "key": "1234567",
        "fulladdr": "2 thing st, somewheretown 12345"
    }
}

An analogous process happens to embedded addresses whenever the Address object is updated.

You can lookup the bOTL Transform with:

ltransform = GDSDocument.GetbOTLTransform("address")

and delete it with

GDSDocument.DeletebOTLTransform("address")

Desired feature (not yet implemented): If the template itself is updated, then all objects affected by that template are also updated.

Deletion

If an object is deleted, then all denormalized links will be updated with a special key “link_missing”: True. For example, say we delete address “1234567”. Then Person will become:

{
    "key": "897654",
    "type": "Person",
    "name": "Fred",
    "address": 
    {
        "key": "1234567",
        "link_missing": True
    }
}

And if the object is recreated in the future, then that linked data will be reinstated as expected.

Similarly, if an object is saved with a link, but the linked object can’t be found, “link_missing”: True will be included as above.

Updating Denormalized Linked Data Back to Parents

The current version does not support this, but in a future version we may support the ability to change the denormalized information, and have it flow back to the original object. eg: you could change addr1 in address inside person, and it would fix the source address. Note this won’t work when transforms are being used (you would need inverse transforms).

Storing Deltas

I’ve had a feature request from a friend, to have a mode that stores a version history of all changes to objects. I think it’s a great idea. I’d like to keep a strongly parsimonious feel for the library as a whole: it should just feel like “ndb with benefits”.

 

Moore Solar!

Posted in Ranting on December 13, 2012 by Emlyn

The Sun!

The outlook for the world in terms of energy security and global warming seems pretty bleak. But I think there’s a light on the horizon, and that light is the sun. Amongst all the renewable energy areas, solar seems to me to be the only one with exponential characteristics (cost reductions, adoption), and the only one that can tap the consumer capital (trillions/year in the US alone).

I thought I’d have a go at quantifying the picture. This is fairly BOTE stuff, but still interesting. Note also that I’m not saying Solar is definitely what we are going to do, but that if we do nothing else, we will do solar.

The technium wants more power, a lot more power. Fossil is not going to do the job from here on out. So a renewable buildout will happen, and it’ll blow what we have now out of the water. Our future is not energy poor.

Anyway, enough of the rhetoric, here are the calcs:

——

From reading/watching Saul Griffith, things can seem gloomy. See his slides here.

He says, amongst other things, that for us to stabilize at 2 degrees C temperature rise, we need to move entirely to renewables over the next 25 years.

We use 18 TW of power on the earth at the moment. He’d like us to actually reduce our power usage, but I’d say we’d have to plan to do about double that, at least.

18TW - breakdown of use

Total energy available

In energy terms, 18 TW is about 157,700 TWh/year (18 TW × 8,760 hours). So we need to hit a target of roughly 300,000 TWh/year globally from renewables (and no fossil) by about 2037. That’s about 34 TW.

There’s a lot of talk about solar installation going exponential. Gregor McDonald shows a graph that looks exponential (he says parabolic, not sure why), with a conservative improvement rate of 1.6× per year in total global solar installation, and some specific figures; eg: in 2011, the total world solar output was 55.7 TWh. Small.

Global Solar Consumption in TWh 2001-2011

But if we extrapolate using an increase of 1.6/year (factor of 10 every 5 years), we get this:

Year Global Solar TWh / year
2010 29.90
2015 365.04
2020 3,827.67
2025 40,136.08
2030 420,857.30
2035 4,413,008.65
2040 46,273,749.60
2045 485,215,432.64
2050 5,087,852,574.96

Year Global Solar TW
2010 0.00 – so small it rounded to zero
2015 0.04 – looking grim
2020 0.44 – still pretty grim
2025 4.58 – more than current world electrical power of 2 TW
2030 48.04 – more than the 34 TW we need before 2037
2035 503.77 – a stupendous renewable power output, a golden age on earth
2040 5,282.39 – impossible to understand the implications of this, but maybe also we can’t get here.
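Here’s a small Python sketch reproducing the extrapolation in those tables (my reconstruction of the arithmetic, nothing official: take the 2011 figure of 55.7 TWh, grow it by 1.6× per year, and convert TWh/year to average TW by dividing by the hours in a year):

HOURS_PER_YEAR = 8760   # 365 * 24

def solar_twh(year, base_year=2011, base_twh=55.7, growth=1.6):
    # Extrapolated global solar output in TWh/year.
    return base_twh * growth ** (year - base_year)

for year in range(2015, 2041, 5):
    twh = solar_twh(year)
    print("%d  %12.2f TWh/yr  %8.2f TW" % (year, twh, twh / HOURS_PER_YEAR))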

85,000 TW (see Saul’s slides above) is the cap of all the solar that hits the earth. Something less than this is what we can harvest. Let’s say we can cover 10% of the earth with 10% efficient solar, that’s 850TW. I expect the efficiency will get higher; we can’t expect that what we build out in the 2030s will be as poor as what we use now. So maybe we can go further? Even if we can’t, these are very, very large numbers! Probably enough to get us off planet easily, collecting power in space (the earth doesn’t actually trap very much of the output of the sun!).

So basically this progression will look tragically awful until the early 2020s, when it’ll start looking promising, then we switch entirely to solar, then shortly after that energy becomes totally abundant; what on earth would we do with it all???

The Ballad of John Henry

Posted in Ranting on November 25, 2012 by Emlyn
When John Henry was a little baby boy, 
        sitting on his papa's knee
Well he picked up a hammer and little piece of steel
Said Hammer's gonna be the death of me, lord, lord
Hammer's gonna be the death of me

This is a post about the destruction and recreation of the world.

I’ve been building a diddly bow. It’s a one-stringed instrument originally invented by African slaves in America’s old South, and the cool thing about it is that you can put it together out of bits of stuff you find around the place; a big plank of wood, nails, bottles, smaller chunks of wood. And some wire, ideally music wire of some kind, really ideally a guitar string.

Now I want to make some more of them, for which everything is easy except the string. I’m using a guitar G-string; I thought today I’d head off to a music shop and grab some more guitar strings, and maybe one for a bass guitar?

So anyway, I went into the city (Adelaide, South Australia), and schlepped into Allans Music, in Gawler Place, right in the CBD. Anything from Allans is going to cost an arm and a leg, but hey, they’ll have some strings, and how much can they be? Note that I have never bought guitar strings, but they seem to be about $10 for a set online, give or take, so how bad can it be?

Actually I hardly ever buy any musical gear from a shop. I buy everything online. I would have bought the strings online, but I needed to get this project moving.

The captain said to John Henry
I'm gonna bring that steam drill around
I'm gonna bring that steam drill out on the job
I'm gonna whup that steel on down

When I arrived, the place was a shambles. Down mouthed, black shirted sales people, scrubbing stuff off the windows, packing stuff up. It’s a three story shop (ground floor guitars, basement synths and mics, upstairs pianos, sheet music, classical instruments). Today, upstairs and basement gone, shut down, and ground floor was like a garage sale.

“No guitar stuff left” says the guy at the till. I asked them a bit about it. No, they’re not moving. Apparently Allans Billy Hyde has completely collapsed, and is being entirely shut down.

From this link:

The receivers of collapsed music chain Allans Billy Hyde have announced all the company’s stores will close and more than 500 jobs will be lost as a result.

Brendan Richards of Ferrier Hodgson said in a statement Australian Music Group Holdings will be shut down.

“The loss of jobs is disappointing, but we exhausted all avenues and there is no other way forward for this business,” he said.

“These people have served music lovers and been a key part of the Australian music industry for generations. It is a sad day for live music in this country.”

The 513 staff will be made redundant over the next few weeks. All the 27 stores will be closed.

The company collapsed earlier this year, after an injection of capital failed to save the business and investors called in receivers. It had been experiencing trouble for years, with the two groups, Allans and Billy Hyde, creating a new chain in 2010.

So I guess everyone there was on the last day of their job.

John Henry told his captain
Lord a man ain't nothing but a man
But before I'd let your steam drill beat me down
I'd die with a hammer in my hand

John Henry is a great song that I’ve just discovered recently. The story is in a classic American style: a heroic big man, greater than ordinary mortals. Wikipedia says:

John Henry is an American folk hero and tall tale. Henry worked as a “steel-driver”—a man tasked with hammering and chiseling rock in the construction of tunnels for railroad tracks. In the legend, John Henry’s prowess as a steel-driver was measured in a race against a steam powered hammer, which he won only to die in victory with his hammer in his hand.

He’s got a touch of Paul Bunyan to him, but he’s a tragic figure. Mightier than the steam drill, he succumbs to his exertions, while as we know the steam drill digs on.

John Henry said to his shaker
Shaker why don't you sing
Because I'm swinging thirty pounds from my hips on down
Just listen to that cold steel ring

There’s an obscure name for the computer that is sometimes used by computer scientists: the General Purpose Machine. It’s called that because this machine, unlike all others before it, can be programmed to do anything that any special purpose machine you might conceive of could do. So it solves an entire class of problems in one shot (as long as we hand-wave around the programming). And that entire class of problems turns out to be anything we can automate. And that in turn might turn out to be Everything.

Oddly enough the world hasn’t actually figured this out, and you can forgive it for that. Progress is slow (in day to day terms) and uneven, because, after all, we can’t hand-wave around the programming, and Moore’s Law is fast, but it doesn’t seem that fast when you’re slogging through your day-to-day life. Some days you mightn’t even think about computers at all!

Now the captain said to John Henry
I believe that mountain's caving in
John Henry said right back to the captain
Ain't nothing but my hammer sucking wind

And we also seem to have a bias toward believing that our institutions are solid. They always have been before, right?

And then, the newspapers stumble. Encyclopedias, well is that still a business?

Let’s not even mention the music publishing industry! Oh, ok, let’s.

There are a lot of theories about what’s brought the music publishing industry undone. Piracy! Apparently LimeWire alone owes the music industry 75 trillion dollars. That surely is a lot of money.

Or how about iTunes, killing the music industry with kindness? Jon Bon Jovi thinks Apple killed music by making it less like when everything totally sucked. But some people who aren’t total idiots have made a better argument that the revenue from digital is just nothing like that from physical media.

I have another idea about this. My own experience is that suddenly, especially since about 2005 when Youtube started (yep, only 7 years ago), I have access to more or less the entire back catalogue of recorded music history. For free. So now instead of listening to the radio, and/or buying the “latest” stuff, I’m listening to this cornucopia of music from the last hundred years. And I see everyone I know doing the same thing. So I hear of a band or a musician that I like, then follow back through their influences, to those people’s influences, and etc until I’m listening to blues from the thirties, or early vocalese, or watching Mahalia Jackson on the Nat King Cole show.

And it’s clear from this vantage that the music industry’s modus operandi has always been to promote the new before the good. Keep taking the old stuff off the shelves, replace it with the new stuff. Keep hammering you with this month’s prodigy, while you whip last month’s out the back door.

Only we don’t have to put up with that any more.

Without this disruption I’d never have heard of John Henry.

Now the man that invented the steam drill
He thought he was mighty fine
But John Henry drove fifteen feet
The steam drill only made nine

What I do is, I make steam drills.

In the tech industry, we’re always disrupting someone. Someone’s job is going, someone’s industry is being replaced/cannibalised. It’s a ruthless, heartless kind of thing.

We don’t notice because we disrupt ourselves most of all. My own career path is nothing like anyone’s I’ve seen from other industries. I think of myself as a professional forgetter. I have to pick up new technologies at a moment’s notice, pick up new languages, pick up entirely new paradigms of computing (eg: Platform as a Service).

And then drop them. These skills used to last 5 to 10 years. Now they last, what, 3? 2? The normal technical skill useful lifespan is shortening, towards the minimum timespan required for mastery.

It’s an industry littered with those who couldn’t keep up. Hordes of one-language techies, one trick ponies, stuck in legacy land or eventually on the scrap heap. Legions of used-to-be-programmer managers, with slowly aging skills, also staring irrelevance and unemployability direct in its dead eye. Mountains of legacy code, built for a time long ago (eg: 2007).

Eventually of course we’ll all put ourselves out of work for good. But when we do that, everyone else will come with us. Intelligence will be something you download on your whatever-phones-are-by-then.

John Henry hammered in the mountains
His hammer was striking fire
But he worked so hard, it broke his poor heart
And he laid down his hammer and he died

Here’s the SmartCompany article about Allans Billy Hyde from waaaay back in August:

Another retailer has bitten the dust. Australian Music Group Holdings, trading as the iconic Allans Billy Hyde brand, has been placed in receivership.

Rumours of the company’s demise were floating around in March, but joint managing director Tim Mason told SmartCompany at that time the company had received an injection of capital, debt was reduced, and the business was trading fine albeit in rough conditions.

The fact a business of this size and calibre – it owns 25% of the market – can be struck by the retail downturn demonstrates the strength of pessimistic consumer confidence but also the ramifications of offshore retail.

Ferrier Hodgson confirmed yesterday the company had been placed in receivership – and those same harsh retail conditions are to blame.

“Things are pretty rough in retail right now,” Ferrier Hodgson partner James Stewart told SmartCompany.

“The business had been recapitalised as far back as March, and the business was not travelling at levels the people in charge would have liked it to travel at. The stakeholders decided to call it a day.”

“Our intention is to seek a buyer for the business as soon as we possibly can.”

The odd thing is, I never noticed until now, and that only by accident. Or perhaps it’s the opposite of odd?

Now John Henry had a little woman
Her name was Polly Anne
John Henry took sick and had to go to bed
Polly Anne drove steel like a man

I took my darling wife back to Allans to pick over the almost bare carcass. She said to me guiltily that she feels a little responsible. She buys a lot of sheet music for singing students and choirs, but it’s all from SheetMusicPlus. Or else free from the choral public domain library and similar. In the end, it’s cheaper and more convenient.

Of course this idea of buying sheet music makes no sense either. SheetMusicPlus fills a niche, but it won’t last, like everything else. Something could happen sooner, but eventually pieces of paper will be replaced in choirs and orchestras by the stupidly cheap e-ink based, A4-sized descendants of today’s tablets. They might stick around selling DRM-infested skeuomorphic bundles of faux paper to tablet owners, but how long can such an artificial situation really continue? Depressingly, longer than is warranted, but it’s by no means permanent.

***

We’ve seen the slide of the music industry, encyclopedias, newspapers.

Now we’re watching cars, movies, retail, universities, all stumbling.

Look forward to this for formal schooling, world finance, and governments.

John Henry had a little baby
You could hold him in the palm of your hand
And the last words I heard that poor boy say 
My daddy was a steel driving man

So the cool thing was, that from the husk that once was Allans Music, I bought some strings for an orchestral bass. Apparently they normally cost $270 at Allans for a set of four! But you can get them from Amazon for $115. Probably there’s a better price if you shop around a bit.

It seems like sacrilege to use these strings on a diddly bow, but they’ll make a great sound, and Allans sold them to me for $30. Because they’ve gone broke. Because the world is quietly getting on with the upheaval of everything, even stuff you like, even stuff you think the world is entirely built on.

So every Monday morning
When the blue birds begin to sing
You can hear John Henry a mile or more
You can hear John Henry's hammer ring