Wednesday, September 21, 2016

Chapter 4

A Look At The Computer Side

Now that we have established more of an understanding of the components that make up an information system, let's delve a little deeper into them.  The aspects we are going to look at specifically are the hardware and software, along with a general glance at data.  We live in a time where these components change by the day, and where their functionality and applicability do the same.

Data

Data in its simplest form is binary, meaning each piece is either a 1 or a 0.  A single binary digit is called a bit.  In simple terms you can think of it as a light switch that is either in the on or the off position.  Bits make up all other measurements of data, but the scale is not something people outside of the tech industry are often familiar with.  It steps up to a byte, which is 8 bits.  To put it into perspective, when you see a character displayed on your screen it is safe to assume it represents one byte of data.  If you look at this blog in its entirety you can imagine there is an astronomically large number of bits and bytes here, but the scale quickly adjusts itself to allow us to manage and quantify these numbers.

After bytes you have kilobytes; one kilobyte is equal to 1,024 bytes.  This is where it gets confusing for most people: the scale starts with a factor of 8 (bits to a byte) but never uses it again, and instead multiplies by 1,024 at every step afterwards.  Next we have megabytes, which is 1,024 kilobytes, and then gigabytes and terabytes as the next steps.  Most people are familiar with these units because they are commonly used to quantify the amount of space on memory devices such as hard drives, SD cards and mobile devices.  Then it moves on to petabytes, exabytes and zettabytes.  These units are not so well known, but they use the same scale, so understanding them is not the issue.  Visualizing them, however, requires a new perspective.  For example, it is estimated that the National Security Agency's data center that is used to monitor and record communication traffic holds around 16 petabytes of data.  That is a lot of data!  Further, it is estimated that all internet traffic across the globe will be in the neighborhood of 1.6 zettabytes by the end of 2018.  While these units are not so common, the modern world of big data has made their existence necessary.
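To make that 1,024-based scale a little more concrete, here is a minimal Python sketch that steps a raw byte count up through the units above.  The 16-petabyte figure is just the estimate mentioned earlier reused as an example, and the function name is my own.

```python
# Minimal sketch: walking a raw byte count up the 1,024-based scale.
UNITS = ["bytes", "kilobytes", "megabytes", "gigabytes", "terabytes",
         "petabytes", "exabytes", "zettabytes"]

def human_readable(num_bytes):
    """Convert a raw byte count into the largest sensible unit."""
    value = float(num_bytes)
    for unit in UNITS:
        if value < 1024 or unit == UNITS[-1]:
            return f"{value:,.2f} {unit}"
        value /= 1024  # each step up the scale is another factor of 1,024

# Roughly 16 petabytes expressed in raw bytes (1,024 per step above bytes)
nsa_estimate = 16 * 1024**5
print(human_readable(nsa_estimate))  # -> "16.00 petabytes"
```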

Now that we understand how data is quantified, we need to quickly take a look at the volatility of data.  Depending on where the data is stored, it may or may not cease to exist once power is removed from the storage device.  On a disk drive or hard drive the data does not disappear when power is removed, because it is "etched" onto a disk and so exists physically to an extent.  This makes it nonvolatile.  However, when the same data is stored in random access memory it is temporary and only exists while the device has electrical current flowing through it.  This is called volatile data.  When building programs and systems, choosing how data is managed is important because of these differences.  Let's take a look at the hardware we were just talking about and how it plays a role.
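Before we do, here is a minimal Python sketch of the volatile versus nonvolatile distinction.  The shopping-list data and file name are just made-up examples: a plain variable lives in RAM and is gone once the program (or the power) goes away, while anything written to disk survives a restart.

```python
# Minimal sketch of volatile vs. nonvolatile storage, using made-up data.
shopping_list = ["milk", "eggs", "bread"]  # lives in RAM: gone when power is cut

# Writing it to disk makes it nonvolatile: the file survives a reboot.
with open("shopping_list.txt", "w") as f:
    f.write("\n".join(shopping_list))

# After a restart the variable is gone, but the file can be read back.
with open("shopping_list.txt") as f:
    restored = f.read().splitlines()
print(restored)  # ['milk', 'eggs', 'bread']
```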

Hardware

Hardware in general has the same or very similar components regardless of what class of device we are talking about.  The class can vary from a personal computer to a server system all the way down to a small mobile device.  They will all have a CPU, or central processing unit.  The CPU can be thought of as the brain of the device.  Its function is to perform arithmetic and logical comparisons, and it stores those outcomes in one or more forms of the device's memory.  The speed of a processor is measured in hertz, which quantifies the clock rate as the number of cycles per second.  Thanks to the power of modern processors you will usually see this displayed in gigahertz; one gigahertz is equal to 1,000,000,000 hertz.  The higher the number, the faster the processor.  Modern processors can feature multiple cores, which allow them to divide tasks efficiently across physically separate processing units.  This increases the efficiency of machines significantly, which matters given how much multiprocessing our modern world demands.  The processes the CPU performs are derived from instructions it receives from some form of software, which we will talk more about shortly.

That brings us to memory.  We know it can be volatile or nonvolatile, but why is there a need for both?  The first and most common form of memory to the layman is storage memory.  In a PC this would be the hard drive; in a server it would be many, many hard drives that can be organized in various ways.  This is the long-term storage for devices, and everything from the operating system to the paper you wrote last semester can be found here.  There is another form of memory called main memory.  This takes the form of small silicon boards holding memory chips, called RAM or random access memory.  The purpose of RAM is to give the processor quick access to and storage of data it is currently working on or expects to access soon.  This data can take the form of pieces of software or code, or it can be the document you have open in your word processor but haven't saved to a permanent location yet.  If something is being displayed on your screen, beyond simply browsing through a file directory, you can assume it is in your RAM in one form or another.  Again, these forms remain constant across device types; what changes is their data capacity and power.
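Going back to the multiple cores mentioned above, here is a minimal Python sketch of how a CPU-bound job can be split across them.  The task (summing squares) and the chunk boundaries are arbitrary examples; the point is simply that independent chunks can be handed to separate worker processes, one per core.

```python
# Minimal sketch of dividing a CPU-bound task across cores.
from multiprocessing import Pool, cpu_count

def sum_of_squares(bounds):
    start, stop = bounds
    return sum(n * n for n in range(start, stop))

if __name__ == "__main__":
    # Arbitrary example: split the range 0..10,000,000 into four chunks.
    chunks = [(0, 2_500_000), (2_500_000, 5_000_000),
              (5_000_000, 7_500_000), (7_500_000, 10_000_000)]
    # One worker process per core lets the OS run the chunks in parallel.
    with Pool(processes=cpu_count()) as pool:
        total = sum(pool.map(sum_of_squares, chunks))
    print(total)
```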

We can divide the types of hardware into fairly simple groups.  The most common and well known is the personal computer.  This describes a full computer that takes the form of either a desktop or a laptop.  The advent of modern mobile technology requires us to add to this, as there are devices now that behave and perform the same functions as a PC but belong to their own group.  Smartphones, e-readers and tablets are forms of mobile hardware.  The distinction here is that they are built for mobility and are generally less powerful.  All of the devices in these two groups probably communicate with our next group at one time or another.  Servers are large, immobile devices whose sole function is to manage large amounts of data and facilitate the communication of other devices with that data.  When one machine connects to a server it is considered a client, which is a simple technical term for one system in communication with another.  Servers are most commonly seen in server farms, which are buildings housing thousands of individual servers.  However, it is possible to create a server that takes the form of a standard PC tower.  It all depends on the scope of the task at hand.
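As a small illustration of the client/server relationship, here is a minimal Python sketch in which this script plays the client and a public web server plays the server.  It assumes you have network access, and example.com is just a convenient placeholder host.

```python
# Minimal sketch of a client talking to a server over a network socket.
import socket

HOST, PORT = "example.com", 80  # placeholder public web server, standard HTTP port

with socket.create_connection((HOST, PORT), timeout=5) as client:
    # The client asks the server for data...
    client.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    # ...and the server answers.  Here we just print the first line of the reply.
    reply = client.recv(1024).decode(errors="replace")
    print(reply.splitlines()[0])  # e.g. "HTTP/1.1 200 OK"
```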

Software

There are several types of software that exist in the tech world.  The most basic form is the operating system, or OS.  The OS is a program that controls a computer's resources.  All other forms of software are either integrated into the OS or use it to interact with the hardware of the device.  An example of an OS is the ever-prevalent Windows OS, or Windows 10 in its current iteration.  Next we have applications.  These are split into two groups, native applications and web applications.  Native applications are generally designed to run on a particular operating system and are installed on the device's storage hardware.  These can exist in the form of common applications known as horizontal-market applications.  This includes applications that are used across a wide range of industries, such as Microsoft Word or Google Chrome.  There are also vertical-market applications, which serve the needs of a specific industry.  The software your mechanic uses for billing or your dentist uses for managing your information and records are examples.  These are often highly customizable, as the specific needs of individual customers vary, and their focused build allows various features to be built in.  Finally, we have one-of-a-kind applications.  These are purpose-built by an organization to perform very specific tasks.  An example might be the software the NSA uses to manage and search its intelligence database.  These are not as common as the other groups, as their development is often very expensive and the need for them is not as prevalent.

Something the first two have in common, and it is an important aspect for managers to understand, is that when a customer pays for software they are not buying the software itself; rather, they are buying a license to install and use it.  The software itself, i.e. the code, is still owned by the company that produced it.  Open source software is the exception to this rule.  Open source software is developed by a community that generally works for free, and the source code, the backbone of the program, is available to the general public at no cost.  OpenOffice and the Linux operating system are examples of such software.  Finally, we have firmware.  This is a form of software that is installed on devices themselves and integrated in such a way that it is essentially part of the device.  Printers, wireless headphones and MP3 players all need software to operate, but these are not applications that can generally be changed.  Think of it as an OS for smaller devices.

Components Summary

All of this information is quite general, and it is important to keep in mind that in the modern tech world it can change overnight.  There are forms of hardware today that are beating down long-standing status quos, such as 3D printing, self-driving cars and the all-inclusive Internet of Things.  The same thing is happening with software: virtualization allows us to easily move the software running on one machine and manipulate it on another.  In the tech world it is quite regular to see a software engineer test their Linux code for a server farm on their Windows PC.  Modern virtualization software allows them to "create" a virtual version of a Linux OS on their desktop.  This has even led to the virtualization of individual personal desktops, allowing users to access the full range of their PC remotely from any PC in the world.  I imagine that the changes these pioneering systems will bring to the MIS world will be wide and sweeping.  I personally can't wait to see what's next.
