What is Computer Science? (Overview)

After a long period of research, I’m happy to report that Art of the Problem’s third episode is in production. This episode will act as the final piece of a CS trilogy. Here is the first video, which gives an overview of the series: (or watch on YouTube)

I’ve also published an essay version of this video with extra links below (or read on Medium).


 

Around 100 years ago something really exciting was happening.

An age-old question, “what is the limit of knowledge?”, collided with a modern mathematical one, “can knowledge be mechanized?”. This gave birth to a new field: Computer Science.

The science of computation.

Image by: Mark Phillips & Cameron Murray

The dream of mechanizing human knowledge was inspired by developments over 300 years ago during the early stages of the industrial revolution. Factories were being developed as industrialists began to explore mass production and the mechanization of labour.

http://digitalcollections.nypl.org/

Gradually, machines were designed to replicate human action. But replicating human mental processes remained the stuff of dreams. This began to change with the development of machines that could add, multiply…and eventually make decisions…

At first even the most advanced machines were limited to one specific task. If you needed something else done, you had to build a new machine. However, around 200 years ago a great industrialist thinker, Charles Babbage, was dreaming of a general symbolic machine: one that could answer complex questions by breaking them down into smaller questions of logic and arithmetic, and braiding the answers together.

Babbage’s Difference Engine: https://en.wikipedia.org/wiki/Difference_engine / Image by: Brit Cruise

His dream was never realized, but his driving question remained: Was it possible to build a general machine that could answer any question?

By the 1900s, mathematicians and philosophers were posing this question in different ways. Mathematicians asked: What are mechanical machines capable of? How powerful could they be? Philosophers asked: What are the limitations of mechanical machines? What will machines never be able to do?

Image by: Brit Cruise

In 1936 Alan Turing bridged this divide with a paper that revolutionized our understanding of what machines could do. Turing outlined blueprints for what he called a universal machine: a machine that could answer anything answerable.

from this paper: https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf / Image by: Brit Cruise

Part of his great insight was that the power of the universal machine would always reside in the instructions it followed (later known as software), not the physical design (or hardware). Using a simple language, his machine could run any instructions imaginable.
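To make that separation concrete, here is a minimal sketch in Python (my own illustration, not Turing’s original construction): the simulator below plays the role of the fixed machinery, while the rule table is the part you swap out, i.e. the “instructions”.

```python
# A sketch of the "instructions vs. machinery" idea. The simulator is fixed
# (the "hardware"); only the rule table changes (the "software").
# The names and the example rules are illustrative, not from Turing's paper.

def run(rules, tape, state="start", head=0, max_steps=1000):
    """Run a table of rules on a tape until the 'halt' state is reached."""
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")      # blank cells read as "_"
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example "program": flip every bit on the tape, then halt at the first blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(flip_bits, "10110"))  # -> 01001_
```

Handing the same simulator a different rule table makes it do a different job entirely, which is the whole point: the machine never changes, only its instructions do.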

http://digitalcollections.nypl.org/

The year after Turing’s paper, a young Claude Shannon completed a master’s thesis describing a clever insight he had about telephone relays. He realized he could arrange electrical switches in various ways to perform the fundamental operations of logic, automatically, using electricity.

From this paper: http://www.cs.virginia.edu/~robins/Shannon_MS_Thesis.pdf
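As a rough sketch of the insight (my own illustration, not code from the thesis): two switches wired in series only pass current when both are closed, which behaves like logical AND, while two switches wired in parallel pass current when either is closed, which behaves like logical OR. From those building blocks, any logical expression can be assembled.

```python
# Switches in series: current flows only if BOTH are closed (AND).
def series(switch_a, switch_b):
    return switch_a and switch_b

# Switches in parallel: current flows if EITHER is closed (OR).
def parallel(switch_a, switch_b):
    return switch_a or switch_b

# With an inverting relay for NOT, any logical expression becomes a circuit,
# e.g. (A AND B) OR (NOT C):
def circuit(a, b, c):
    return parallel(series(a, b), not c)

print(circuit(True, True, False))   # True  -> the lamp lights
print(circuit(False, False, True))  # False -> it stays dark
```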

Suddenly it was theoretically possible to build a universal computer, powered by electrical clocks that buzzed away at near the speed of light and followed any instructions you provided.

However, key questions remained: what can these machines do, and what can they never do?

In the decades that followed, computing machines grew in speed and memory capacity. Suddenly many hard questions faced by humans became easy (and very practical) for computers to answer quickly.

https://en.wikipedia.org/wiki/Manchester_Small-Scale_Experimental_Machine

But deeper problems emerged…

There seemed to be a growing set of seemingly easy problems (such as: is a given number prime?) that were computable on our machines but took too long when the instances were large (such as: is 10⁹⁸ + 1 prime?). It could take thousands or even millions of years for the computer to give you an answer, or even to halt.

Image by: Brit Cruise

These problems were practically impossible to solve. Think of these as hard problems.

And people considered drawing a line in the sand between problems that were easy (practical to solve) and problems that were hard (practically impossible to solve). The attempt to precisely define this division between easy and hard problems leads to the most important unsolved question in computer science today.
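To give a feel for the scale involved, here is a hedged sketch of the brute-force approach to the primality example above (not the state of the art in primality testing): trial division needs roughly √n steps, so a 99-digit number like 10⁹⁸ + 1 would demand on the order of 10⁴⁹ divisions, far more than any machine could perform in millions of years.

```python
# Trial division checks every candidate divisor up to sqrt(n): fast for small
# numbers, hopeless for a 99-digit number like 10**98 + 1 (~10**49 divisions).
import math

def is_prime_naive(n):
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime_naive(97))        # True, answered instantly
# is_prime_naive(10**98 + 1)     # ~10**49 steps: we would never see the answer
```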

Image by: Art of the Problem

What makes hard problems, hard?

Is it a result of some underlying mathematical pattern?

OR

Is the perception of hardness merely an illusion?

Will new insights make existing hard problems, easy?

This question is not just an intellectual curiosity: the backbone of the internet depends on a set of problems being “out of reach” of our machines.

But to begin we must step way back and revisit the mythology of ancient oracles….

What is knowledge? And what does it mean to think….or, “to compute?”

Image by: Brit Cruise

Part 2/10 coming soon…

 

 
