Computer Fundamentals: The Pace of Innovation


Computer Fundamentals

The pace of innovation across the technologies that comprise a personal computer continues to accelerate, often leading to product lifecycles of eighteen months or less. Investment in Research & Development (R&D) in each of the areas analyzed in this paper continues to increase, with each succeeding generation of comparable systems delivering better price/performance than the last (Dedrick, Kraemer, 2005). The intent of this analysis is to evaluate computer hardware and components, operating system functions, and how computers manage the input/output (I/O) process, in addition to the areas of multiprogramming and concurrent programming. The last two sections of the paper analyze hardware and memory management, and security.

Analysis of Computer Hardware

One of the most critical concepts relating to the rapid advances in computer hardware is Moore's Law (Larus, 2009). Dr. Gordon Moore, one of the founders of Intel Corporation, observed during the initial product generations of the microprocessor he helped to invent that the number of transistors per chip could be increased by as much as 40% each year while reducing cost by 50% or more (Larus, 2009). Moore's Law is what has driven the rapid growth in functionality of microprocessors, video and memory components, and the pervasive use of Ethernet-based chipsets that allow computers to communicate directly over the Internet. The level of functions microprocessors support progressed quickly along the trajectory of Moore's Law, which became a design objective within Intel Corporation and remains one today (Pankratius, Schulte, Keutzer, 2011).
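The compounding effect described above can be illustrated with a short calculation. The 40% annual growth figure comes from the paragraph above; the starting transistor count is a hypothetical round number chosen only for illustration:

```python
# Illustrative sketch: compound transistor growth under the essay's
# stated figure of up to 40% more transistors per chip each year.
def transistors_after(years, start=1_000_000, annual_growth=0.40):
    """Project a transistor count after a number of years of compounding."""
    count = start
    for _ in range(years):
        count *= (1 + annual_growth)
    return int(count)

# A decade of 40% annual growth multiplies the count roughly 29-fold,
# which is why even modest yearly gains reshape the industry so quickly.
print(transistors_after(10))
```

Compounding, rather than any single year's gain, is what makes the trend so consequential for hardware design.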


Because of Moore's Law and continual investment in R&D, Intel has been able to define the microprocessor market by out-innovating its competitors. The company's latest generation of Core i5 and i7 processors has been designed to support real-time video streaming on laptops and tablet-based systems and devices. In addition, Intel's hyper-threading technology builds on the lessons learned from multithreaded operating systems such as Windows XP and Windows 7. Intel HD Graphics was likewise designed on the advances the company made in graphics accelerator technology over the preceding five years (Taylor, 2005). Other microprocessor and chipset producers have adopted the pace set by Moore's Law as well, and the concept dominates the industry today as a result. Form factors of PCs and laptops continually shift based on advances made at the microprocessor and chipset levels, allowing for greater performance and portability of increasingly high-performance systems without expensive cabinets, cases, or bulky enclosures. The pace of innovation is quickening as laptops and tablets gain greater computing power and sophistication over time, predicated for the most part on Moore's Law and on decades of development of microprocessors, graphics chipsets, and network chipsets. Intel is in all three of these businesses and continues to experience rapid growth as its i5 and i7 processors are adopted by customers into tablet PCs, laptops, and high-end servers in anticipation of advances in operating systems, the subject of the next section of this paper.

Analysis of Operating Systems

The function, design, and implementation of an operating system are what unify the many hardware and software components so the computer can operate (Boudreau, 2010). There are literally thousands of operating systems in existence and use today, as many are developed for a specific purpose within a business or for making internal computer systems more compatible with each other. The most dominant operating systems in use today include the many variations of Microsoft Windows, and the UNIX operating system and its many variants, including BSD and Apple's Mac OS X. The open source arena is the fastest growing, with Linux and its Web-based variant, Google Chrome OS, dominant as well (Boudreau, 2010).

Despite these differences in the types of operating systems, all share a common set of functions. Each operating system by definition has a user interface; a kernel, which contains the core functions that orchestrate the elements of the operating system so they work together; and networking and security components. As operating systems focus more on the goal of designing in security, all aspects of the kernel are now integrated directly with security-based protocols and functions (Funell, 2010).

The kernel of an operating system is where the major differences between Microsoft Windows, UNIX, and open source Linux operating systems are found. The core functions of an operating system include support for devices through device driver code, definition of the file system structure and disk access routines and algorithms, and definition of memory management. The kernel also defines the programming models, how the overall system will handle software interrupts, and how program execution at the byte-ordering level will be completed (Boudreau, 2010). How an operating system designer chooses to define each of these areas has a direct effect on operating system performance, scalability, security, and usability (Boudreau, 2010). The current state of design objectives in operating systems puts a very high priority on security (Funell, 2010). The priority is high enough that the current generation of Microsoft operating systems assigns security properties at the process thread level, a depth of security functionality not seen in any previous operating system; this ensures any application process can be audited, started, or stopped depending on the anticipated threat it poses. In conclusion, all aspects of operating systems are defined to support a coordinated, synchronized response to application and user requests and requirements.

Analyzing I/O and File Systems, File Structures, Naming, and Disk Management

The I/O systems of any computer are defined by the operating system. The goal in designing an I/O system is to ensure the highest level of performance while also maintaining the integrity of the data. Each of the dominant types of operating systems, from Microsoft Windows to UNIX and open source Linux, takes a different approach to defining I/O, as each has a completely different kernel architecture that defines the entire operating system's functionality (Boudreau, 2010). All, however, share the common attribute of supporting both synchronous and asynchronous I/O, and each operating system can be configured to support networked I/O functions, either ad hoc or structured (Boudreau, 2010).
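The synchronous/asynchronous distinction above can be sketched in a few lines. This is a minimal illustration of the concept at the application level, not how any particular kernel implements I/O; `asyncio.sleep` stands in for the latency of a real disk or network read:

```python
import asyncio
import time

# Asynchronous I/O: a request is issued, and the caller is free to issue
# more requests (or do other work) until each one completes.
async def async_read(delay):
    await asyncio.sleep(delay)  # stands in for a non-blocking read
    return delay

async def main():
    # Three "reads" issued concurrently finish in roughly the time of the
    # slowest one. Synchronous (blocking) I/O would take the sum of all three.
    return await asyncio.gather(async_read(0.1), async_read(0.1), async_read(0.1))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results, round(elapsed, 1))
```

The payoff of asynchronous I/O is exactly this overlap: total latency tracks the slowest request rather than the total of all requests.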

File systems also vary significantly across operating systems and define the taxonomies of how data will be indexed, queried, stored, and accessed. File systems and their corresponding file structures are designed to be optimized for the specific kernel of the operating system (Volkel, Haller, 2009). Microsoft operating systems historically relied on the File Allocation Table (FAT) file system, which in turn has direct implications for file naming conventions and disk management (Volkel, Haller, 2009). The later adoption of NTFS within the Windows operating systems is a direct result of security becoming a primary concern, and therefore a design objective, for enterprise and large business customers (Funell, 2010). All aspects of the I/O systems, file systems, file structures, naming, and disk management revolve around optimizing the efficiency of the entire operating system's performance.
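The relationship between naming and stored data described above can be seen from user space with the standard library. This is a minimal sketch; the file name `report.txt` and its contents are hypothetical examples:

```python
import tempfile
from pathlib import Path

def write_and_stat(text):
    """Create a named file, then read back the metadata the file system keeps."""
    with tempfile.TemporaryDirectory() as tmp:
        path = Path(tmp) / "report.txt"  # the name is resolved by the file system
        path.write_text(text)            # the name now maps to stored bytes
        info = path.stat()               # metadata: size, timestamps, etc.
        return path.name, info.st_size

name, size = write_and_stat("file systems map names to stored bytes")
print(name, size)
```

Whatever the underlying file system (FAT, NTFS, ext4), the operating system presents the same abstraction: a human-readable name resolved to on-disk data plus the metadata kept alongside it.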

Multiprogramming and Concurrent Programming: History, Processes and Scheduling; Mutual Exclusion, Synchronization, and Communication

One of the most important functions of the kernel is to orchestrate the use of memory and schedule the units of work applications need to complete, often referred to as processing threads (Boudreau, 2010). Multiprogramming and concurrent programming are used by operating systems to stay synchronized with the microprocessor and to direct which task or thread completes in which sequence. In this way, multiprogramming and concurrent programming serve as the foundation for optimizing processes and scheduling within the system's constraints (Volkel, Haller, 2009). Using multiprogramming and concurrent programming, Microsoft was able to create one of the first commercially successful multithreaded operating systems in Windows NT (Taylor, 2005). The lessons learned in that operating system's development are what allow for multithreading of 64-bit applications on mobile devices that require very little power, an engineering design accomplishment not possible just three years ago (Boudreau, 2010). Multiprogramming, synchronization, and communication are critically important for making an operating system optimized for its hardware constraints.
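Mutual exclusion, one of the synchronization mechanisms named in this subtopic's heading, can be sketched with a lock guarding a shared counter. This is a minimal user-space illustration of the concept, not a depiction of any specific kernel's internals:

```python
import threading

# Minimal sketch of mutual exclusion: a lock serializes access to a shared
# counter so concurrent threads cannot interleave in the middle of an update.
counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:  # only one thread may hold the lock at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: every update survived, none were lost
```

Without the lock, two threads could read the same old value of `counter` and each write back old value + 1, silently losing an update; the lock makes the read-modify-write step atomic with respect to the other threads.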

Hardware and Memory Management: Static Relocation, Virtual Memory, Segmentation, Paging, Load Control

From the very first operating systems, the need for hardware and memory management was designed into the kernel, first through procedure calls and later through Application Programming Interfaces (APIs) that allow virtual memory to be defined and segmented (Boudreau, 2010). Virtual memory management, paging, and the optimization of memory are today managed in the Windows operating system at the processor thread level, and can be configured for pre-emptive vs. cooperative multitasking (Volkel, Haller, 2009). This refers to an operating system's capability to manage the many interrupts, process requests, and requirements concurrently, and it has led to the development of applications that can partition memory requirements to also…
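The paging mechanism named above can be sketched as a simple address translation. This is a toy model under stated assumptions: a 4 KB page size (common, but hardware-dependent) and a dictionary standing in for a real page table:

```python
# Minimal sketch of paging: a virtual address splits into a page number and
# an offset, and the page table maps page numbers to physical frame numbers.
PAGE_SIZE = 4096  # a common page size; real systems vary

def translate(virtual_addr, page_table):
    """Translate a virtual address to a physical one via a toy page table."""
    page = virtual_addr // PAGE_SIZE    # which virtual page the address is in
    offset = virtual_addr % PAGE_SIZE   # position within that page
    if page not in page_table:
        raise LookupError(f"page fault: page {page} is not resident")
    frame = page_table[page]            # the physical frame holding the page
    return frame * PAGE_SIZE + offset

# Hypothetical mapping: virtual page 2 lives in physical frame 5.
table = {0: 9, 1: 3, 2: 5}
print(translate(2 * PAGE_SIZE + 42, table))  # frame 5, offset 42
```

A miss in the table models a page fault, the event that lets the operating system load pages on demand and keep only a program's working set in physical memory.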


