Follow up to Deep Learning 2.0

In my previous post Deep Learning 2.0 I outlined a system for enabling deep learning on the ‘net. The system consists roughly of:

Know: A unit of knowledge. A short name and description of something someone can know.
Learn: A “recipe” for learning a Know. The material might be inline, or references to external sources (eg: MIT OpenCourseWare, Wikipedia, blogs, etc).
Person: Someone with a set of Knows, who can undertake Learns to get more Knows.

Knows have a set of Learns that can lead to them. Learns have a set of prerequisite Knows that you need before you can undertake the Learn.

The collection of Knows and Learns is crowdsourced. The links between Knows and Learns are weighted by some kind of voting system (probably a simple Like system).
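
To make this concrete, here is a minimal sketch of how that model might look in code. Every name and field here is an assumption for illustration, not a settled design; in particular the simple likes counter stands in for whatever voting system ends up weighting the links.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Know:
    """A unit of knowledge: a short name plus a description."""
    name: str
    description: str = ""


@dataclass
class Learn:
    """A recipe for acquiring Knows; material may be inline or external references."""
    title: str
    body: str = ""                                          # inline material, if any
    sources: list[str] = field(default_factory=list)        # e.g. MIT OCW / Wikipedia / blog URLs
    prerequisites: set[Know] = field(default_factory=set)   # Knows needed before starting
    teaches: set[Know] = field(default_factory=set)         # Knows you come away with
    likes: int = 0                                          # crude stand-in for weighting the links


@dataclass
class Person:
    name: str
    knows: set[Know] = field(default_factory=set)

    def can_undertake(self, learn: Learn) -> bool:
        # A Learn is available once all of its prerequisite Knows are held.
        return learn.prerequisites <= self.knows

    def complete(self, learn: Learn) -> None:
        # Undertaking a Learn grants the Knows it teaches.
        self.knows |= learn.teaches
```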

Some new thoughts on this:

1 – By Learners, not by Teachers.

In early online discussions about this concept, a lot of objections were raised by professional teachers. How can you crowdsource learning paths? There is no concept of authors being qualified. How can you be sure that people have the Knows they say they have (the accreditation problem)?

I realised that this is not an idea that makes sense to teachers, who favour carefully curated courses that teach whole areas at once to students who just accept what they are taught. Rather, this is a system for autodidacts, and it should be constructed by autodidacts. For learners, by learners.

2 – Personal learning toolset

An idea like this, as with any online social tool, has the bootstrapping problem: how do you get a minimal amount of content in there to make it useful? I think the right approach is the one used by the social bookmarking sites. They provided a useful service (personal online bookmarking) that became more powerful as more people used it, because the personal collections could be aggregated into useful social collections via tags.

DL2 could provide a toolkit for tracking your own progress through unknown territory, from what you know to what you don’t, using the tools of Learns and Knows. If this data is public, then you could draw from and connect with other people’s Learns and Knows as they come to exist over time. So it begins as a personal learning tool, the autodidact’s friend, and builds out into a crowdsourced deep learning knowledge base. This also satisfies the vision “for learners, by learners”.
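
As a sketch of what that personal tool could answer, reusing the assumed types above: “given what I know and a Know I’m after, what can I start on right now, and what am I still missing?”

```python
def plan(person: Person, target: Know,
         catalogue: list[Learn]) -> tuple[list[Learn], set[Know]]:
    """Walk back from a target Know through the prerequisites the person lacks."""
    ready: list[Learn] = []      # Learns that could be started right now
    missing: set[Know] = set()   # Knows still needed somewhere along the way
    to_visit = [target]
    seen: set[Know] = set()
    while to_visit:
        know = to_visit.pop()
        if know in person.knows or know in seen:
            continue
        seen.add(know)
        missing.add(know)
        for learn in catalogue:
            if know in learn.teaches:
                if person.can_undertake(learn):
                    if learn not in ready:
                        ready.append(learn)
                else:
                    to_visit.extend(learn.prerequisites)
    return ready, missing
```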

3 – An idea for enhancing the crowdsourcing of learns

If you have a big chunk of learning material (say the size of a chapter of a book), it could be very difficult to decide what the dependencies are (what Knows should be required). So break it down.

Assume the body of the Learn is available inline. Then you could allow people to mark up, on a sentence-by-sentence basis, which Knows are required; ie: attach a required Know to a sentence. Allow the same at higher levels too: paragraph, section, etc.

So a Learn then depends on the union of all these sets of Knows.
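
One possible representation, continuing the sketch above: every span of the body (sentence, paragraph, section) carries its own set of required Knows, and the Learn’s prerequisites fall out as the union.

```python
from dataclasses import dataclass, field


@dataclass
class Span:
    """Any piece of a Learn's body (sentence, paragraph, section) with its own dependencies."""
    text: str
    requires: set[Know] = field(default_factory=set)


def learn_prerequisites(spans: list[Span]) -> set[Know]:
    deps: set[Know] = set()
    for span in spans:
        deps |= span.requires   # the Learn depends on the union of every span's Knows
    return deps
```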

Then you can apply this principle: everyone creating or editing Learns should strive to minimise the dependencies. A Learn should depend on what it needs to depend on, and no more.

People can then look at Learns and edit them over time to remove unwarranted dependencies, right down at the level of individual sentences.

An example: in the Wikipedia article on Distributed Hash Tables (http://en.wikipedia.org/wiki/Distributed_hash_table), we have the following sentence:

“A key technique used to achieve these goals is that any one node needs to coordinate with only a few other nodes in the system – most commonly, O(log n) of the n participants (see below) – so that only a limited amount of work needs to be done for each change in membership.”

If this was in DL2, it’d need to be tagged with a required Know ‘Big O Notation’ (http://en.wikipedia.org/wiki/Big_O_notation), a mathematical notation used in computer science to describe how an algorithm’s running time or resource use grows with the size of its input.

Using Big O Notation here is succinct, but perhaps it isn’t necessary? If it could be rewritten without it, you’d remove a frankly onerous dependency.

But you might lose depth. Perhaps the basic text of the Learn could be marked up with optional flyouts, which could carry heavier dependencies; because a flyout is optional, its dependencies are optional for the Learn as a whole. eg:

“A key technique used to achieve these goals is that any one node needs to coordinate with only a few other nodes in the system so that only a limited amount(1) of work needs to be done for each change in membership.”

flyout (1): “most commonly, O(log n) of the n participants (see below)” (dependency: Big O Notation)
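
Continuing the sketch, a flyout could be an optional aside with its own (possibly heavier) dependencies; the Learn’s hard prerequisites then exclude anything that is only required inside a flyout the reader never opens. All names here are illustrative.

```python
from dataclasses import dataclass, field


@dataclass
class Flyout:
    text: str
    requires: set[Know] = field(default_factory=set)      # e.g. {Know("Big O Notation")}


@dataclass
class Sentence:
    text: str
    requires: set[Know] = field(default_factory=set)      # hard dependencies of the main text
    flyouts: list[Flyout] = field(default_factory=list)   # optional asides with their own dependencies


def prerequisites(sentences: list[Sentence], include_flyouts: bool = False) -> set[Know]:
    deps: set[Know] = set()
    for s in sentences:
        deps |= s.requires
        if include_flyouts:                # flyout Knows only count if the reader opts in
            for fly in s.flyouts:
                deps |= fly.requires
    return deps
```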

So the approach is to allow people to refine and simplify at the sentence-by-sentence level, thereby making it easier to make a useful edit and easier for the crowd to participate.

4 – Crowdsourced Learns & Knows vs Personal Learns & Knows

Section 2 above assumes personal, owned Learns and Knows created through a personal learning tool. Section 3, on the other hand, talks about crowd-constructed, publicly editable, wiki-like Learns and Knows. How can we use or combine both concepts? I’m not sure; it needs more thought.


8 thoughts on “Follow up to Deep Learning 2.0”

    1. emlyn says:

      Wikipedia seems like a great start. It’s not marked up with dependencies though, which is crucial imo. I was thinking of building my own.

      Ooh, brain spasm: Build a feature for sucking in a wikipedia article in easy-to-markup-with-dependencies form. Hmm!

  1. Kelly says:

    Seems like the right approach would be to create a lot of fine-grained Learns, then automatically assemble an article based upon the prerequisites that a reader is known to have. As he reads the article, he can indicate “I already know Big O notation”, so that becomes a known prerequisite beforehand in the generation of future articles, or the article can be regenerated with that prerequisite known.

    This will result in rather machine-like articles, not smooth to read, but perhaps informative.

    1. emlyn says:

      I like this idea. Actually presenting it as one big page might not always be practical, but it has something to it.

      I really like the idea of them clicking “I already know this”. I think you’d remodel the article immediately based on that kind of input. So you start with giant articles for people without much profile, and they seamlessly prune it back until what’s left is what they need to know. Awesome. Thanks Kelly.

      Hey, could you please have a look at my new post, “Calling all Autodidacts”? I’m looking for input on how people do their autodidact thing.
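
A rough sketch of the regeneration idea from this exchange, reusing the assumed Know/Learn/Person types sketched in the post: assemble an article out of fine-grained Learns, skip anything the reader already knows, and order the rest so prerequisites come before the material that needs them. Each “I already know this” click just adds a Know to the reader and regenerates.

```python
def assemble_article(reader: Person, fragments: list[Learn]) -> list[Learn]:
    # Drop fragments whose Knows the reader already has.
    needed = [f for f in fragments if not f.teaches <= reader.knows]
    ordered: list[Learn] = []
    covered = set(reader.knows)
    while needed:
        progress = False
        for frag in list(needed):
            if frag.prerequisites <= covered:   # everything it needs has been presented already
                ordered.append(frag)
                covered |= frag.teaches
                needed.remove(frag)
                progress = True
        if not progress:
            break    # remaining fragments need Knows from outside this set
    return ordered
```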
