Logarithmic Measure of Information (Entropy Primer)

Posted in Video / Theatre on June 1, 2013 by Brit Cruise

It wasn’t until the 1920s that the question “how do we quantify information?” was well articulated. This video introduces a key idea of Nyquist and Hartley, who laid the groundwork for Claude Shannon’s historic equation (information entropy) two decades later. In these early papers, the idea of using a logarithmic function appears, something which isn’t immediately obvious to most students fresh to the subject. If one takes this for granted, they will forever miss the deeper insights which come later. So, the goal of this video is to provide intuition for why the logarithm was the ‘natural’ choice…

Link to Khan Academy lesson.
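The intuition can be sketched in a few lines of Python. Hartley’s 1928 measure defines the information in a message of n symbols, drawn from an alphabet of s symbols, as n · log(s); the function name below is my own, and the base-10 logarithm is just one conventional choice:

```python
import math

def hartley_information(num_symbols, message_length):
    """Hartley's measure: H = n * log10(s) for a message of n symbols
    drawn from an alphabet of size s."""
    return message_length * math.log10(num_symbols)

# Why the logarithm? The number of possible messages grows
# multiplicatively (s ** n), but intuition says information should
# add: two messages carry twice the information of one. The
# logarithm turns that product into a sum.
print(hartley_information(10, 3))  # 3 decimal digits -> 3.0

# Additivity check: a 5-symbol message followed by a 3-symbol
# message carries the same information as one 8-symbol message.
print(math.isclose(
    hartley_information(10, 5) + hartley_information(10, 3),
    hartley_information(10, 8)))   # True
```

Counting messages directly would make information multiply when messages are concatenated; taking the logarithm is exactly what makes it add instead.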

Symbols, Signals & Noise

Posted in Uncategorized on April 8, 2013 by Brit Cruise

The following video/simulation was an attempt to bridge the gap between information as what we mean to say vs. information as what we could say. I view this as an important stepping stone towards Hartley, Nyquist and Shannon, which I will deal with next. It covers symbols, symbol rate (baud) and message space as an introduction to channel capacity, featuring the Baudot multiplex system and Thomas Edison’s quadruplex telegraph.
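The relationship between symbol rate, alphabet size and message space can be sketched as follows (a minimal illustration, not the video’s actual simulation code):

```python
def message_space(num_symbols, symbol_rate, seconds):
    """Size of the message space: with an alphabet of s symbols sent
    at r symbols per second for t seconds, a channel can carry
    s ** (r * t) distinct messages."""
    return num_symbols ** (symbol_rate * seconds)

# A 2-symbol telegraph sending 1 symbol/second for 10 seconds:
print(message_space(2, 1, 10))   # 1024 possible messages

# Doubling the alphabet vs. doubling the symbol rate:
print(message_space(4, 1, 10))   # 4 ** 10 = 1,048,576
print(message_space(2, 2, 10))   # 2 ** 20 = 1,048,576 (the same)
```

This is why a richer alphabet and a faster symbol rate are interchangeable ways of enlarging what a channel could say, which is the question channel capacity makes precise.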

Play with simulations used in video on Khan Academy:

Symbol Rate


Information Theory, a practical approach

Posted in Research and Projects, Video / Theatre on January 15, 2013 by Brit Cruise

In order to understand the subtle conceptual shifts leading to the insights behind Information Theory, I felt a historical foundation was needed. First I decided to present the viewer with a practical problem to which future mathematical concepts will be applied. Ideally this will allow the viewer to independently develop key intuitions and, most importantly, begin asking the right kind of questions:

I noticed the viewer ideas for how to compress information (reduce plucks) fell into two general camps. The first are ways of using differentials in time to reduce the number of plucks. The second are ways of making different kinds of plucks to increase the expressive capability of a single pluck. Also, hiding in the background is the problem of what to do about character spaces. Next I thought it would be beneficial to pause and follow a historical narrative (case study) exploring this problem. My goal here is to congratulate the viewer for independently realizing a previously ‘revolutionary’ idea and, at the same time, to reinforce some conceptual mechanics we will need later. It was also important to connect this video to previous lessons on the origins of our alphabet (a key technology in our story), providing a bridge from the proto-alphabets we previously explored…

This is followed by a simulation which nails down the point that each state is really a decision path.
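That idea can be sketched in a few lines of Python (my own illustration, not the simulation itself): each reachable state corresponds to one path through a tree of yes/no decisions, so n binary decisions yield 2 ** n distinct states.

```python
from itertools import product

def decision_paths(n):
    """Enumerate every path through n binary (yes/no) decisions.
    Each path, e.g. ('0', '1', '1'), names exactly one end state."""
    return list(product("01", repeat=n))

paths = decision_paths(3)
print(len(paths))   # 8 states from 3 decisions (2 ** 3)
print(paths[0])     # ('0', '0', '0')
```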
