
Why CS is awesome – WBW post

I recently wrote a fairly long "comment" on one of my favorite forums, Wait But Why. The site hosts a weekly discussion section called The Dinner Table, where individuals debate topics and hopefully enlighten each other with positive reasoning and sentiment. Last week's topic was especially awesome: it invited readers to teach the community something they felt they knew a lot about. I intended to write a short post, but I got so excited by how AWESOME computers are that I kept going, adding some easily digestible excerpts on some of the most fascinating aspects of the digital world. In the continual quest for content creation, I decided to share it here as well. This will be the first in a series I plan to write, teaching basic CS concepts in a jargon-free and humorous manner. Hope you enjoy.

-begin original post-

Computer science is awesome, and in today's world it touches nearly every part of our lives. It was a discipline I knew nothing about 4 years ago, and today I am working toward my master's in it. Despite the viewpoint of many, CS is NOT hard or complicated. Like any discipline, it requires time and patience, and in this case, an incredible attention to detail (a missing semicolon could bring down this whole site) and an ability to think abstractly.

When I was just starting out, one of my mentors explained to me that all of computing (at least coding and physical computational systems) could be boiled down to 2 concepts: abstraction and caching. (Disclaimer: I am not trying to prove the validity of that statement, but I would welcome any and all insight into its merits.) So, with that in mind, let's dive in.

ABSTRACTION In a sentence, abstraction is the concealment of the underlying workings of a piece of code. As a programmer, you don't want to give away HOW the code works. The rationale for this is several-fold, the largest reason being security. Visualize this: a would-be assassin plans to terminate you by cutting the brake cables in your car. In a poorly designed car (piece of code), he merely inserts a screwdriver under the front and jiggles a handle, which opens the hood, leaving him free to cause mayhem (this is known as an exploit, and the attacker has now gained control of your system). However, in a well-designed car (piece of code), the assassin pokes and prods, jiggles, tampers, smashes, welds and grinds his way for hours, leaving him exhausted and the hood of the car still firmly locked. Thanks to your fantastic code style and rigorous design, you have foiled the assassin and avoided certain death. When we properly hide away the workings of our code, we protect it from being exploited (hacked) and also make it modular and reusable.

The above analogy describes one main reason for abstraction, but the second is just as important and the basis for all large-scale systems. Abstraction breaks our code into small, manageable chunks. Imagine going to a library and only being able to check out one book that was 7 billion pages long and weighed 4 tons…totally impractical, right? The same goes for code. If we break our code into manageable pieces (abstract it), we can use it across many applications, and most importantly, it makes it easier to test for bugs and guard against exploits. You may be thinking, "What is the point of having code that you know nothing about, in tons of pieces all over the place?" We solve this problem through another fundamental CS concept: the Application Programming Interface (API). An API is the way we get pieces of code to interact with each other while still preserving the principles of abstraction. Back to our car analogy. We jump in the driver seat, insert the key and start our engine. The car key here is the API. To the driver (user), all we see is the key turning (input) and the engine starting (output). We do not see the battery connecting to the spark plugs, the injectors firing a burst of gas into the cylinder, or any other processes, because they are all hidden under the hood (abstracted). NOR DO WE CARE what goes on under the hood. All we care about is getting to the store and back for a cold six pack so we can more properly enjoy reading Wait But Why articles in our underpants. (In the case of a programmer, all we care about is using pre-built, tested, secure pieces of code to construct larger and larger applications.) Ever notice how the in-app cameras for Instagram and Snapchat are nearly identical in looks and functionality? This is abstraction at work. When we build something once, we can use it over and over and over. (This process is more commonly referred to as modularity.)
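To make the car-and-key picture a bit more concrete, here is a minimal sketch in Python (the Car class, its method names and the key string are my own inventions, purely for illustration): the driver only ever touches start(), which plays the role of the key/API, while the battery and injectors stay hidden under the hood.

```python
class Car:
    """The public interface (API) is just start(): the 'key'."""

    def start(self, key: str) -> str:
        # All the driver sees: key goes in (input), engine starts or not (output).
        if not self._check_key(key):
            return "Nothing happens."
        self._connect_battery()
        self._fire_injectors()
        return "Engine running."

    # Everything below is 'under the hood'; callers never touch it.
    # (A leading underscore is Python's convention for internal details.)
    def _check_key(self, key: str) -> bool:
        return key == "my-key"

    def _connect_battery(self) -> None:
        pass  # imagine electrons reaching the spark plugs here

    def _fire_injectors(self) -> None:
        pass  # imagine a burst of gas hitting the cylinders here


print(Car().start("my-key"))   # Engine running.
print(Car().start("crowbar"))  # Nothing happens.
```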

So, if we abstract properly, why are there still hacks on the daily? The simple answer is, we don't. Most code is written at 4 am by someone jacked on caffeine, or finished on a Friday with no review because you're trying to cut out at 3:30 without the boss noticing. Programmers are human, they make mistakes, and they get lazy. In fact, the majority of computer hacks today are a result of human error, through a process called "social engineering", which I will touch on briefly in the last section. We now move to…

CACHING Caching is a method of moving data closer and closer to the place where it is used, from areas where it is easier and cheaper to store. Got it? No? Alright then….TO THE ANALOGY ROOM!!!!!

You sit at a table in a library, frantically working to finish a term paper. On your table sits a host of books in various piles, Post-it notes mark important pages, and you have highlighted important sentences on the marked pages. All the information you need will eventually get aggregated into your final term paper, but first you must pull all this data from the various books strewn about in front of you. You have the course textbook that you reference frequently, filled with bookmarks. You have several of the author's other books, which each have a paragraph or 2 that you plan to use. In a large stack on your desk, you have various other books you pulled off the shelf that might be worth looking at, based on their titles and the section of the library you found them in. Then you have the whole rest of the library, and additionally the other libraries on campus that may have other books you need. All of these levels are similar to the levels of a computer cache.

Let's start from the smallest and work our way out. At the center of this whole book fiasco lies your term paper, the place where all the meaningful work and citations will end up. This is your computer's Central Processing Unit (CPU). It is where the computer pulls all the data it needs to create a meaningful, concise output. On each bookmarked page lies a sentence you wish to quote. This sentence is a segment of data that will contribute to the final product, so a bookmarked page can be considered a Level 1 (or L1) cache: the closest and most easily accessible piece of data in the scope of all books in the campus library system. Next we have the stack of books. This can be thought of as your computer's Random Access Memory (RAM). RAM is fast, but since it is not stored directly on the chip, it takes longer for this data to reach the processor. Next, the whole library you are in can be thought of as your computer's hard drive. A hard drive is comparatively slow to access (you actually have to get out of your seat and walk to a shelf…..ugh) but can store lots of data. All the preceding pieces make up your own personal computer. Next, we can think of the campus library system as a computer's Local Area Network (LAN). A LAN is made up of resources located outside your computer, but still somewhat central to your physical location (think of a shared storage drive on your school's network, located in the IT dungeon somewhere on campus). Finally, we can think of a service such as Amazon Books, from which you can order ANY book, granted it will take 2-8 weeks for delivery. This is the Internet as a whole: all possible information you may need to write your paper.
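Here is a toy sketch of that hierarchy in Python, under some loudly made-up assumptions (the level names and sleep times are placeholders chosen only so the relative order is right): data starts out far away, and once it has been read it gets copied into every faster level so the next read is nearly free.

```python
import time

# A toy memory hierarchy: (name, access cost in seconds, storage).
# The costs are invented; only their relative order matters here.
HIERARCHY = [
    ("L1 cache",   0.000000001, {}),
    ("RAM",        0.0000001,   {}),
    ("hard drive", 0.00001,     {}),
    ("internet",   0.1,         {"meaning_of_life": 42}),  # data starts out far away
]

def read(key):
    """Look in the closest level first; once found, copy the data closer."""
    for level, (name, cost, storage) in enumerate(HIERARCHY):
        time.sleep(cost)  # simulate the latency of this level
        if key in storage:
            for _, _, faster in HIERARCHY[:level]:
                faster[key] = storage[key]  # cache it in every faster level
            return storage[key], name
    return None, "nowhere"

print(read("meaning_of_life"))  # (42, 'internet'): the slow first trip
print(read("meaning_of_life"))  # (42, 'L1 cache'): now it lives right next to the CPU
```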

In reality, accessing these levels takes nanoseconds (L1 cache) to several seconds (a web page on the internet). An L1 cache is located directly on the CPU and can store only a small amount of data, which can be accessed nearly instantly. Processors generally have 2-3 levels of cache located on their physical chip. A typical L1 cache is a few dozen KB, while an L3 may be several MB. RAM in today's consumer systems ranges from 4-32GB. Traditional hard drives are passing the 10TB mark, with 4TB being easily accessible and 1TB nearly standard. To put it into perspective, a single MP3 song is about 4MB, and your typical husband's porn collection is 10GB.

So why not just make a huge L1 cache if it's so efficient? Mainly space and cost. The fast memory used for caches takes far more chip area and money per byte than the slower kinds, and as the amount of data grows, it becomes impossible or impractical to keep it all physically close to the CPU.

In the process of writing this, I came up with a few more CS fundamentals I will share, 'cause let's be real, the procrastination monkey is at the wheel 🙂

DATA STRUCTURES At the heart of every computing problem lies one fundamental question: how do we most efficiently structure data for the quickest and slimmest application possible? Every problem begins with selecting a proper data structure. If you decide to take a CS course, THIS IS THE ONE YOU SHOULD PAY MOST ATTENTION TO. Understanding data structures is essential to solving problems, and it will come up in every job interview you will ever have. In researching this post, I found a few sites that explain things quite well, so I will link to those and fill in the gaps.

First, what is data? Data is the fundamental building block on which applications operate. An application reads and modifies data based on its intended function. Data can take many forms: a single bit (0 or 1), a word (cat, dog), a client contact (name, age, address, phone #, etc.) and many more. What is important is that we treat each chunk of data as a single, unbreakable unit (hello abstraction, our old friend :). When considering data structures, we do not care whether our data is a word, a contact or a bit. We treat each of those as a single unit. It is how we want to organize those units that decides which structure we will use.

The 8 basic data structures are: linked list, array, stack, queue, tree, heap, hash table and graph. (Although one can argue several of them are just variations of each other.) You can read all about them at this link: http://cs-fundamentals.com/data-structures/introduction-to-data-structures.php
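If you want to poke at a few of these without installing anything, Python happens to ship with rough stand-ins for several of them (this mapping is approximate, not an official one-to-one correspondence):

```python
from collections import deque
import heapq

array = ["cat", "dog", "bird"]        # array/list: items in order, grabbed by position
print(array[1])                       # dog

stack = []                            # stack: last in, first out (a pile of plates)
stack.append("first")
stack.append("second")
print(stack.pop())                    # second

queue = deque()                       # queue: first in, first out (a line at the DMV)
queue.append("first")
queue.append("second")
print(queue.popleft())                # first

heap = []                             # heap: always hands back the smallest item
for n in (5, 1, 3):
    heapq.heappush(heap, n)
print(heapq.heappop(heap))            # 1

contact = {"name": "Tim", "age": 37}  # hash table (dict): look things up by key
print(contact["name"])                # Tim
```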

For fun, let's examine a problem where we need to find the best DS (street lingo for data structure) for the job. How do we order tasks for a printer? After careful examination, we would probably arrive at the conclusion that a QUEUE is our best option. A queue takes data (print requests) in one end and spits it out the other end in the same order it entered. A data queue is no different from a queue at the DMV (visualize your binary self next time you need to pass the time in line). A queue may seem like the only option in this case, but consider this scenario: you walk into the grocer, only needing bread and milk. You grab your items and head to the queue to find 20 people in front of you. 19 of them have 1 item, but the person in front has 17 shopping carts, filled to the brim with single baby food jars and single pistachios that must be weighed individually. While the person at the front did arrive first, the 19 people behind could all be checked out in a tenth of the time. If we are looking to OPTIMIZE customer wait times (and overall satisfaction), a queue might not be our best option after all.
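Here is a quick sketch of both situations in Python; the file names and cart counts are made up for illustration. The printer uses a plain first-in, first-out queue, while the impatient grocery line swaps in a priority queue (heap) keyed on how many items each customer has.

```python
from collections import deque
import heapq

# Printer: a plain first-in, first-out queue. Jobs come out in arrival order.
print_jobs = deque()
print_jobs.append("tax_return.pdf")
print_jobs.append("cat_meme.png")
print(print_jobs.popleft())  # tax_return.pdf (it arrived first)

# Grocery line: a priority queue (heap) keyed on how much scanning each customer needs,
# so the quick shoppers get out fast even though the cart hoarder arrived first.
checkout = []
heapq.heappush(checkout, (17 * 40, "cart hoarder"))     # (items to scan, customer)
heapq.heappush(checkout, (1, "milk-and-bread person"))
heapq.heappush(checkout, (2, "you"))
while checkout:
    items, customer = heapq.heappop(checkout)
    print(f"{customer} checks out ({items} items)")
```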

This leads to the next section:

TIME/SPACE COMPLEXITY and EFFICIENCY (BIG O NOTATION) When we talk about algorithms, we need to change our language with regard to comparing their pros and cons. WAIT!!! back up… WTF is an algorithm? Glad you asked, little Timmy. An algorithm is yet another building block of the ever expanding realm of computation. An algorithm, simply put, is a sequence of instructions or actions. Your daily life consists of algorithms. Your "morning routine" algorithm probably looks something like this:

- alarm goes off
- open eyes
- get out of bed
- put on pants (optional)
- eat breakfast (also optional but highly recommended by 1 of 1 Doctors)
- leave house to get to work

You are literally surrounded by algorithmic processes: reading a book, doing the dishes, following directions on GPS, baking/cooking. EVERYTHING IS AN ALGORITHM (BTW, a great book title you can have for 7% royalty fees). However, not all algorithms are created equal. For instance, our morning routine algorithm could instead have been:

- alarm goes off
- open eyes
- close eyes
- sleep 30 more minutes
- open eyes
- stare at ceiling contemplating outfit for 25 minutes
- get out of bed
- put on an outfit totally different than the one you decided on above
- leave house to get to work

All these steps will lead you to the same conclusion (leaving the house and getting to work), but the 2nd option is probably going to result in a less positive outcome.

When we talk about algorithms, we need a basis for comparison. How about the time it takes to complete the task? Well, let's envision my grandma's computer from 1984 taking 30 minutes to load a web page that my iPhone 6 will load in 12ms. It's the exact same task, but the result depends on the physical constraints of our machine…time just won't do for comparison. Well, what about the size of the code it takes to run our algorithm? This again is misleading, as a single Java (programming language) command can do the work of 20 lines of C (programming language) code, but you can almost guarantee the C code will run faster because of the efficiency and design of the C language. Discussing algorithm complexity must then boil down to a comparison with other algorithms.

Let's again work backwards and envision the LEAST efficient algorithm possible:

- alarm goes off
- open eyes
- close eyes
- sleep 23.999 more hours
- leave house to get to work

We have taken the most possible time we could to complete the task of waking up and leaving the house (not to mention, you probably don't have a job now). However inefficient, we DID complete the task of leaving the house for work. Our algorithm did serve its intended purpose, albeit in the crappiest way imaginable. In CS this is called the "naive solution", or the "brute force" approach. This solution will work, but we can do better. How about if we instead place our bed on a catapult in the entryway? When our alarm goes off, the catapult springs us out the (hopefully) open front door. Our new algorithm looks like this:

- alarm goes off
- leave house for work

Given this sequence, we have now created the fastest possible way to complete the task of leaving the house for work (if you rule out time travel, lasers, flying monkeys, etc.; let's just keep it simple for now). Given our above algorithms, we can organize them on a spectrum from fastest to slowest.
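To see the same "sleep 23.999 more hours" versus "bed on a catapult" contrast in code, here is a small sketch in Python: two algorithms that both add up the numbers 1 through n, one by plodding through every number and one in a single step using Gauss's formula.

```python
def sum_naive(n):
    """The 'sleep 23.999 more hours' version: plod through every single number."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_catapult(n):
    """The 'bed on a catapult' version: one step, via Gauss's formula n*(n+1)/2."""
    return n * (n + 1) // 2

n = 10_000_000
assert sum_naive(n) == sum_catapult(n)  # both reach the same destination...
# ...but the naive version performs ten million additions to get there,
# while the catapult version does a single multiplication and division.
```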

For computer algorithms, the same process exists. We can organize algorithms from best to worst; however, instead of just time as our basis, we organize them by the time the algorithm takes to finish RELATIVE to the size of its input. The size of our input is referred to as "n", and this is the basis for Big O notation. This is how we talk about algorithms compared to other algorithms. Our spectrum here runs from constant time, O(1), where the time stays the same no matter the input size, to exponential/factorial growth, O(n!), where you could literally die in the time it takes to process all the data you inputted.
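To put rough shapes on that spectrum, here is a sketch of what code at a few points along it can look like (these functions are toy examples of mine, not anything standard):

```python
from itertools import permutations

def first_item(items):
    """O(1), constant time: grabbing one thing takes the same time at any input size."""
    return items[0]

def find_item(items, target):
    """O(n), linear time: in the worst case we look at every item once."""
    for item in items:
        if item == target:
            return True
    return False

def all_pairs(items):
    """O(n^2), quadratic time: every item gets compared against every other item."""
    return [(a, b) for a in items for b in items]

def all_orderings(items):
    """O(n!), factorial time: every possible ordering of the input."""
    return list(permutations(items))

# For n = 20: first_item does 1 step, find_item at most 20, all_pairs 400,
# and all_orderings would need about 2.4 quintillion. That last one is the
# "you could literally die waiting" end of the spectrum, so we don't call it.
```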

This has been a massively simplified (but hopefully very accessible) intro to complexity. Here is a link to get the full-on details if you want: https://www.interviewcake.com/article/big-o-notation-time-and-space-complexity

As with many aspects of CS, complexity theory lies in the realm of theoretical computer science, and this is where I like to spend a bulk of my free brain cells (as I imagine most readers would as well). So, just to cap this off, let's touch on:

COMPUTATIONAL THEORY IMHO, the world doesn't get much cooler than computational theory. It touches on everything. Despite my not being able to comprehend some of the more advanced topics, I do love to dive into the Wikipedia pages and try to fit my brain around them. Computational theory relies heavily on math and proofs, which I don't much care for, but let me highlight a few of the more insane concepts that you can explore deeper for yourself.

Cryptography: Crypto is the branch of CS that deals with security. It is the reason you can transfer money online, and it has caused such a stir lately with online privacy. Let's kick it off with another visualization that sums up crypto super simply: you have a deck of playing cards, from which you must select one. From there you may place it into a sealed envelope, a locked box, or leave it out on the table. Leaving it on the table is how most internet traffic is transmitted: right out in the open, unencrypted, for all to see. If you place it in the envelope, we step into the idea of "tamper evidence". Sure, someone who opens it saw your card, but at least you KNOW they saw it, and now you can plan accordingly. The locked box represents encryption. Encryption is largely based on the properties of prime numbers and how hard it is to factor the product of two huge primes. Lastly, an option most do not consider: we may place our card back into the shuffled deck, leaving a potential eavesdropper with a 1-in-52 chance of guessing our card. This is the idea behind "perfect secrecy": what the eavesdropper sees gives them no better odds than blind guessing, and systems lacking this property are vulnerable to attack. If you take one thing away from cryptography it should be: NEVER design your own security protocols. Choose ones that have been tried and tested for years or decades. Every day you can read about a new hack or exploit that undermines a "perfectly secure system". In the words of Han Solo: "Don't get cocky."
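For the "put the card back in the shuffled deck" idea, here is a tiny Python sketch of the classic one-time pad, the textbook example of perfect secrecy (the message is made up, and true to the advice above, this is a toy for intuition, not something to build a real system on): XOR the message with a truly random, same-length, never-reused key, and the ciphertext alone tells an eavesdropper nothing.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of the message with the matching byte of the key."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ace of spades"
key = secrets.token_bytes(len(message))  # truly random, same length, used only once

ciphertext = xor_bytes(message, key)     # what the eavesdropper actually sees
print(ciphertext)                        # looks like random noise
print(xor_bytes(ciphertext, key))        # b'ace of spades': the same key decrypts it

# Without the key, every possible 13-byte message is equally consistent with
# that ciphertext, which is the "perfect secrecy" property in action.
```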

Quantum computing: Holy F#*king S#it Batman, this stuff is insane. As secure as our digital world may seem, quantum computing could overturn all of that. One reason crypto works so well is that it is built around protecting against an attack by planning for the worst case scenario and multiplying the complexity by billions and trillions. It's one of those "if you tried a billion possible passwords per second (which the fastest supercomputer can do), it would take you 17 lifetimes of the universe to crack the code" type of things. But enter quantum computing and that all goes out the window. A quantum bit (qubit) can hold not just a 0 or a 1, but a blend of both at once, letting certain computations explore enormous numbers of possibilities in parallel (massive simplification). Wow, just WOW.

The halting problem: a foundational question in CS, settled by Alan Turing and a staple of modern CS: given a computer program and an input, can we always know for sure whether the program will ever halt? The answer is NO; Turing proved that no general procedure can decide this for every possible program. This seemingly simple question has spawned entire branches concerned with decidability and the classification of problems. Another example is the "traveling salesman" problem: given a bunch of cities on a map, figure out the shortest route you can travel to hit all the cities and arrive back at your hometown. You may think there is some clever shortcut for finding this out, but nobody has found one: no efficient exact algorithm is known, and in the worst case you are essentially stuck with the naive/brute force approach, i.e., calculate all possible routes and pick the shortest one. Whether a truly fast solution even exists is one of the great open questions of CS.
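Here is what that brute force approach looks like in Python for a handful of made-up cities (the coordinates are invented for illustration): enumerate every possible route and keep the shortest.

```python
from itertools import permutations
from math import dist

# Invented city coordinates, purely for illustration.
cities = {"Home": (0, 0), "A": (2, 5), "B": (6, 1), "C": (4, 7)}

def route_length(route):
    """Total distance of a round trip that starts and ends at Home."""
    stops = ["Home"] + list(route) + ["Home"]
    return sum(dist(cities[a], cities[b]) for a, b in zip(stops, stops[1:]))

# Brute force: try every possible ordering of the other cities, keep the shortest.
others = [city for city in cities if city != "Home"]
best = min(permutations(others), key=route_length)
print(best, round(route_length(best), 2))

# With 3 cities there are only 6 routes to check; with 20 cities there are
# roughly 2.4 quintillion, which is why brute force stops being an option fast.
```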

Artificial Intelligence: WBW has covered this in detail so I will skip it. All I will add is that AI is actually REALLY simple, if you have a working foundation in statistics and a crap-ton of sample data with which to model your problem and a bulk of its outcomes.

IN SUM: CS is awesome. If you are considering it as a degree, an online course, or getting your kids involved early, I say DO IT. Job demand is high, the pay scale is ridiculous and, best of all, you can work in any field imaginable, from alternative energy to design to space exploration to banking and more.

If you feel like letting the procrastination monkey drive for a while, start on the wiki page for CS and get lost in all it has to offer. Best decision I have ever made: unlimited prospects and, best of all, the ability to contribute to SOLUTIONS to the problems the world faces today. Technology has the power to save us if you just hop along for the ride.

https://en.wikipedia.org/wiki/Computer_science
