Internet Censorship in Australia – what are they thinking?

This was a comment on Russell Blackford’s blog, Metamagician and the Hellfire Club. He was rather irritated about the internet censorship trial underway in Australia, and particularly about the minister responsible and his performance on an episode of the ABC’s Q&A program. I watched it and wrote the following:

I just watched that Q&A episode, and found it illuminating.

I think Stephen Conroy believes in what he is doing, and not for weird right-wing / religious reasons. If you listen carefully to what he says, you’ll notice that he is starting from different axioms than most of the rest of us.

He was defending the filtering trials as being just the same as what we already have for books, movies, etc., and pointed out that this blacklist is a pre-existing thing, part of our regular censorship process (previously unenforced/unenforceable).

Imagine you believed in the current censorship regime. Your premise is something like “there are bad things out there that decent people shouldn’t have to see, and it is part of the government’s job to sort that out”. It feels anachronistic now, but it’s a mainstream conservative position that comes out of mainstream conservative values. It also relies on some related unspoken assumptions (part of that value system) relating to the importance of hierarchies, respecting authority, maintaining order, that kind of stuff. Jonathan Haidt explains this better than I ever will: http://www.edge.org/3rd_culture/haidt08/haidt08_index.html

Saying that this is attacking free speech will not impress someone with this value system. You have to hold a politically small-l liberal set of values to care about freedom more than order. He will hear people saying “free speech” and just think “this person doesn’t know what is good for them”.

What no one articulated in that program were these things:

1. The problem that the government is trying to solve with censorship is indeed a problem, that is not disputed. Things like child pornography are truly bad and wouldn’t exist in a perfect world.

2. However, the only way the censorship solution can work is via an unaccountable body doing the censorship on everyone else’s behalf (Conroy: “If we published the list, it would defeat the purpose of the list”). We do not trust them to do that job. It’s not that we don’t trust Stephen Conroy, or the current government bodies in this area in particular, it’s that no one can be trusted with that job in general. That is the fundamental premise of free speech; not that we want child porn, but that there is no way that we can guarantee censorship without abuse.

3. This censorship push is being partly justified by saying “it’s just more of the same, same as with movies, books, games”. What needs to be clearly articulated here is that we don’t believe in that, either. There is no more reason to trust censors with books and movies than with the internet.

So this, I believe, is an old-school liberal vs conservative clash. They believe we need to be protected from ourselves; we say no one can be trusted with that job, particularly anyone who volunteers for it! All the stuff about slowing down the internet, about how we’ll just be able to circumvent it anyway, etc., is a sideshow.

I’d like to see opposition to the filtering grow, knock out the filtering push, then roll on into knocking out censorship in this country altogether.


Esteso Voce Concept #1

(Image: Esteso Voce Interface #1)

(Update: Next post is here, previous is here)

I am in the process of inventing a new musical instrument, called the Esteso Voce.

In my previous post I talked about the conceptual path leading to the Esteso Voce. In this post, I detail my first concrete concept for the instrument. To the left is the first shot at the interface (think touchscreen), which I will explain below.

What I didn’t make clear in my previous post was the original inspiration for this instrument. My idea was to be able to extend the voice, using external equipment, such that a single singer could sing a motet or madrigal by themselves.

Now obviously this is not easily possible, in that the different lines generally have different rhythms and words. What I think is achievable via the Esteso Voce is singing homophonic pieces (eg: Tavener’s The Lamb) and pieces in strict canon, also allowing harmonic variation (eg: Byrd’s Non Nobis Domine, which includes canon-style repetition, with some lines in parallel organum fifths). A combination of canon and intricate harmonic transposition in realtime is also achievable, but I don’t know of any existing music that does this.

(Incidentally, I just listened for the first time to that recording of The Lamb linked above; it’s the best choral singing I’ve ever heard.)

(Image: Esteso Voce Flow Diagram)

Architectural Overview

The Esteso Voce is in essence a vocal harmoniser combined with delay functionality appropriate for singing in canon. It takes a single vocal input, copies it up to 3 times, delays those copies as desired by the player, and then transposes the copied voices up or down in pitch, as controlled in real-time by the player. The original voice signal and the copies are then recombined and output.
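
For anyone who thinks in code, here is a rough offline sketch of that signal flow. The real thing would be a realtime patch; the function names and the crude resampling pitch shift below are placeholders of mine, not the actual harmoniser:

    import numpy as np

    def transpose(signal, semitones):
        """Crude pitch shift by resampling; a real harmoniser (PSOLA-style)
        would preserve duration and formants. Placeholder only."""
        ratio = 2.0 ** (semitones / 12.0)
        idx = np.arange(0, len(signal) - 1, ratio)
        return np.interp(idx, np.arange(len(signal)), signal)

    def esteso_voce(source, sr, voices):
        """source: mono float array; voices: up to 3 (delay_seconds, semitones) pairs."""
        copies = [(int(d * sr), transpose(source, st)) for d, st in voices]
        out_len = max([len(source)] + [start + len(c) for start, c in copies])
        mix = np.zeros(out_len)
        mix[:len(source)] += source                 # the live source voice
        for start, copy in copies:                  # delayed, transposed copies
            mix[start:start + len(copy)] += copy
        return mix / (len(voices) + 1)              # recombine and output

    # e.g. one voice a third below, plus two delayed voices a fourth and a fifth above:
    # out = esteso_voce(voice, 44100, [(0.0, -4), (2.0, 5), (2.0, 7)])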

(I intend the singer and the player to be the same person, but they could of course be two different people.)

In the interface, sets of four vertical lines represent these voices. The bold red line indicates the source vocal (which is being sung by the singer directly).

Wherever there are three sets of controls in a row, they refer to the three copied vocal lines, not to the source vocal. Each control in such a set affects the voice on the far side of it from the source vocal. Where the effect is relative (eg: in the main harmoniser controls), it is applied relative to the voice on the near side, the side closer to the source vocal (which may be the source vocal itself).

(Image: Primary Harmonizer Controls)

Primary Harmonizer Controls

The Primary Harmonizer Controls are the player’s main means of controlling the transposition of the copied vocal lines.

In the diagram, the source voice is voice 2, the second vertical line (the red one). The line to its left is voice 1 and represents a vocal line which will sit below it in pitch. The two lines to its right (voices 3 and 4) represent two vocal lines; the leftmost of these sits above the source line in pitch, and the rightmost sits above that.

How far the lines are transposed is determined in real time by the circle touched by the player in the appropriate column. Voice 1 is controlled by the column of circles between it and the source voice (each represents a number of semitones to transpose the source down by). Voice 3 is controlled by the column of circles between it and the source voice (each represents a number of semitones to transpose the source up by).

Voice 4 is controlled by the column of circles between it and Voice 3, and the circles represent the amount to transpose Voice 3 by, in order to create Voice 4. This is the same as adding together the transposition amounts for the middle and right hand columns (for Voice 3 and Voice 4), and transposing the source voice by that many semitones.
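
As a worked example (the column values here are made up, purely to show how the offsets combine):

    # Hypothetical circle values, in semitones, for the three columns:
    left, middle, right = 3, 4, 3

    source = 0                    # measure everything relative to the source voice
    voice1 = source - left        # 3 semitones below the source
    voice3 = source + middle      # 4 semitones above the source
    voice4 = voice3 + right       # 3 above voice 3, i.e. middle + right = 7 above the source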

These controls are intended to be realtime. So, the circle touched in any column has effect while it is touched, and ceases that effect as soon as it is no longer touched. If multiple circles in any column are touched simultaneously, only one is used (the most recently touched? the least recently touched? something else?). If no circle is touched in the column, then the assumed value to transpose by is zero semitones (as if there were a topmost row of circles for zero, one of which was being touched).
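
One way to express those rules per column, assuming (my choice only, the question above is still open) that the most recently touched circle wins:

    class Column:
        """Tracks which circles are currently touched in one control column."""
        def __init__(self):
            self.touched = []                 # circles in the order they were touched

        def touch(self, semitones):
            self.touched.append(semitones)

        def release(self, semitones):
            if semitones in self.touched:
                self.touched.remove(semitones)

        def offset(self):
            # Most recently touched circle wins; zero when none is touched,
            # as if an implicit topmost row of zero circles were being held.
            return self.touched[-1] if self.touched else 0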

Next blog, I’ll talk about the rest of the interface, including the Secondary Harmonizer Controls, the Auxiliary Controls, and the Delay Controls.



Esteso Voce (preamble)

(Update: Next post is here)

I was having trouble at the start of the year figuring out what projects I’d do this year. So I just rambled back and forth amongst the ideas I had (I’ve posted about them previously here). Finally, obsession emerges from ennui, and I know what it is that I’m going to do.

The project is codenamed Esteso Voce (“Extended Voice”; Italian speakers please correct me and I’ll fix it). The intent is to create an instrument which extends what a highly trained singing voice can do, allowing the singer (me!) to achieve sounds and music which could not be achieved unaided, while only producing sounds which would be judged by an expert in voice as “authentic acoustic sounds”.

Some explanations to come, but first a little rundown of the history of this idea.

A few years ago (2006 it turns out) I had this idea for making the “poly voce”. I really didn’t know much about digital audio, so I conceptualized it from whole cloth.

Here’s the original document where I laid out the vision/requirements as I saw them.

Realtime Vocal Ensemble Instrument

I sent this to Tyrell Blackburn (a classmate of Jodie’s at the time at Adelaide Uni), and he gave me very interesting feedback, the most important of which was to point me at Max/MSP.

I was just going over the email exchange from 2006 (thank you gmail for remembering everything), and noticed that the day after he told me about Max/MSP, I sent him the first prototype working patches for the concept. It was a manic period.

Tyrell also warned me about the idea of using a midi keyboard for relative pitches (I talk about it in Realtime Vocal Ensemble Instrument): that it would confuse people who were used to the keyboard mapping to absolute pitches (ie: everyone). It turns out he was absolutely right!

I got as far as making successive prototypes (thank you to Christian Haynes from the Adelaide Con Electronic Music Unit for introducing me to the poly~ object), but they had a lot of lag, and were never really going to be reliable. I didn’t have a decent laptop and audio equipment for the job at the time, so I was using an underpowered PC, which didn’t help (this stuff is CPU intensive).

I wish I’d made a recording of it in action, but I never did. What I have found are the first prototype patches that I sent to Tyrell; I probably have the others lying around.

Now, the main thing I found out from this process was that this has already been done; this kind of effect is called a Vocal Harmonizer. Christian sent me a link to this, for example. Really amazing stuff. So, I lost my head of steam a bit, thought about buying some off-the-shelf effects, but never got around to it.

What I also learned in the process was about the PSOLA algorithm (well, I learned only that it exists, and that it requires real-time pitch detection, so all my ranting about no pitch detection is somewhat nullified, but I’ll live). I used a free PSOLA implementation.

Some years passed. During that time I’ve done a few other related things. The most relevant has been a bunch of work with Ableton Live, which I used to make polyphonic music with only my voice, via recording samples in realtime and looping them. Some examples are here, here and here.

The Ableton Live thing shows promise, but honestly I just don’t like that software. It’s just not flexible. There is no user scripting, no macros, and not much customization for live performance. It has this crappy facility to map keystrokes to actions for using it live, and people go to town with midi->keystroke mapping software, and all kinds of funny stuff, to tie it to foot pedals and the like (and get great results, no doubt about it, eg: this).

And then, I moved to Ubuntu for my main notebook. No more Windows-only software like Ableton. I really want to embrace the free software scene anyway, and there’s a lot of stuff available for Linux, so it’s all good.

And we come to the present. It grabbed me recently that what I want is more than a vocal harmonizer. A vocal harmonizer is an effect, a gimmick.  What it does is good, but the context is wrong.

What I want to build is an instrument, not a piece of software, not a set of effects, not a sound module. The aesthetic is that of a violin or a trumpet or a bassoon. Something acoustically rich, simple in concept. Something which you have to learn to play. Something you don’t particularly customize, rather you master it. An instrument is a musical paradigm in itself.

What does all that highfalutin stuff mean though? Well, it means that it can be limited, not endlessly accommodating. So, if I want vocal harmonizer functions, I can have a limited number of voices (I’m thinking of 4). Also, an instrument should have its own unique control system, and I think I’ve invented something a little bit weird but cool for this (for next blog post). Finally, an instrument should sound excellent, and define its own sound, not try to sound like something else. In this, I’m bending things a little, in that the sound is the human voice, but I want to stick to the following principle. This instrument is not for making a mediocre voice sound better. It is for taking a great voice further than the human voice could otherwise go. If being easy to use and sounding beautiful conflict, then beauty wins.

The short list of functions this instrument will have is:

  1. A vocal harmoniser, optimising for excellent sound (the harmonised voices should sound like plausible, unaltered human voices).
  2. Harmonised voices might support gender changes and other timbral modifications, probably this will involve fun with formants.
  3. Voices can be delayed by a multiple of a player-chosen fixed interval (think roughly a “bar”). This allows the player to sing in canon with themselves.
  4. Harmoniser functions and delay functions can be combined.
  5. May want support for just intonation or other non-equal-tempered tunings.

The devil will be in the details, but the core idea is combining a vocal harmoniser and a simple delay mechanism.
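
In code terms the combination is tiny (the numbers below are assumptions for illustration, not design decisions): the canon delay is just the chosen “bar” length times a whole-number multiple, and the harmoniser transposition is a frequency ratio, which is exactly where equal temperament and just intonation (function 5 above) would differ:

    def delay_samples(bar_seconds, multiple, sr=44100):
        """Delay a copied voice by a whole number of 'bars' for singing in canon."""
        return int(bar_seconds * multiple * sr)

    def equal_tempered_ratio(semitones):
        """Frequency ratio of an equal-tempered interval."""
        return 2.0 ** (semitones / 12.0)

    # A perfect fifth: 7 equal-tempered semitones vs the just-intonation ratio 3/2
    print(equal_tempered_ratio(7))   # ~1.498
    print(3 / 2)                     # 1.5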

That’s all I’ve got the energy to write tonight. This will be continued.

Oh, the other important thing: I found the free software successor to Max/MSP, called Pd. Miller Puckette is a genius. It looks as though my first versions will be built in Pd.



Blogging Fail and the Cognitive Surplus

Well, I went back to almost full time work (0.9 of full time) and my blogging went out the window. Crap.

I know exactly what caused it, too. I was working Monday afternoon, Tuesday, Wednesday, Thursday. That’s 0.7. And, crucially, I had devoted Monday morning to blogging. So I had a set time each week to write something substantial.

Changing to 0.9 has meant I now work all business days except Friday morning. Friday morning is also booked up, because I sing in a choir which rehearses then (the Prospect Singers, a community choir run by my darling wife, lots of happy retired folk who own all of their own time, and me).

So, I have effectively sold my blogging time.

I’ve also noticed a massive hit to my personal projects; they just stopped. Full time commitment really hits my ability to do my own thing amazingly hard. And notice, I’ve only increased my work by 1 day per week, which you wouldn’t think was a big deal. But apparently it is.

I think it goes something like this: When I’m working full time, Saturday becomes a recovery day. That only leaves Sunday, and there’s not much of that because family commitments and preparing for the week ahead eat most of it. Also, it’s harder to get motivated at night, because paid work now is a much larger proportion of my waking productive hours, so it dominates my thinking; much more difficult to put work issues aside and do something unrelated.

If you divide the 7 days of the week into 14 half days, 0.7 paid work gives you 7 half days of paid work, 7 half days to yourself. Paid work doesn’t seem to dominate in the same way. I would find that, firstly, the paid week was short, so on Friday I wouldn’t have much mental baggage to shed. What there was would be vanquished by singing on Friday morning, and a leisurely Friday afternoon. Then Saturday, Sunday and Monday morning would be productive mixes of family stuff and my own projects. That turns out to be quite a lot of time, enough so that by Monday afternoon, I was actually looking forward to switching tasks.

I have to stay at this workload for now. But I think I’m learning some useful things. The lesson re: this blog is that it’s not going to work if I don’t schedule some time for it. I’ll do that.

The bigger story is that I need to schedule in time for doing my own thing. I can do it, too. I’ve been watching a lot of tv lately (watching lots of full seasons of really good shows actually), which is easy to slip into, and doesn’t really add to my life. So I think I need to grab a chunk of the cognitive surplus, and make time for my projects of an evening.

Also along with the telly is staying up too late every night, which becomes a vicious cycle. I watch telly until the wee hours, am tired the next day, by the evening am too tired to get it together to do anything useful, can only watch telly, but this is unsatisfying so I do it until late to feel like I have some command over my life, lather rinse repeat. So, the other unintuitive thing is to go to sleep earlier; on the same day as I woke up would be a good start!

It looks like I have a plan:

  • Stop watching tv
  • Start going to bed earlier
  • If I decide to do a project, I must explicitly schedule in time for it

I think these three things will get me back on the path that I want to be on.
