METAL GEAR RISING: REVENGEANCE – Gameplay Basics for Beginners


A thorough guide on playing Metal Gear Rising: Revengeance.

You will learn everything from how to walk and move the camera to performing perfect parries on every enemy in the game.

This guide is for everyone, from a total beginner to a seasoned veteran.

Before the guide

————————————————————————————————

Video games can be easy or hard. Metal Gear Rising: Revengeance is both, with difficulties ranging from Easy to Revengeance, the hardest difficulty in the game.

This game has tons of mechanics that can be confusing to new players, which is why this guide was made. You are going to learn everything this game has to offer. After this guide, you will have the experience to play the game on its hardest difficulty.

Before we learn how to play the game, we must first understand the technology we are using to play it: the computer. Without understanding how it works, we will have difficulty playing the game. Worry not, as this guide will explain everything in great detail.

————————————————————————————————


The history of computers

19TH CENTURY

1801: Joseph Marie Jacquard, a French merchant and inventor invents a loom that uses punched wooden cards to automatically weave fabric designs. Early computers would use similar punch cards.

1821: English mathematician Charles Babbage conceives of a steam-driven calculating machine that would be able to compute tables of numbers. Funded by the British government, the project, called the “Difference Engine,” fails due to the lack of technology at the time, according to the University of Minnesota.

1843: Ada Lovelace, an English mathematician and the daughter of poet Lord Byron, writes the world’s first computer program. According to Anna Siffert, a professor of theoretical mathematics at the University of Münster in Germany, Lovelace writes the first program while translating a paper on Babbage’s Analytical Engine from French into English. “She also provides her own comments on the text. Her annotations, simply called ‘notes,’ turn out to be three times as long as the actual transcript,” Siffert wrote in an article for The Max Planck Society. “Lovelace also adds a step-by-step description for computation of Bernoulli numbers with Babbage’s machine — basically an algorithm — which, in effect, makes her the world’s first computer programmer.” Bernoulli numbers are a sequence of rational numbers often used in computation.

1853: Swedish inventor Per Georg Scheutz and his son Edvard design the world’s first printing calculator. The machine is significant for being the first to “compute tabular differences and print the results,” according to Uta C. Merzbach’s book, “Georg Scheutz and the First Printing Calculator” (Smithsonian Institution Press, 1977).

1890: Herman Hollerith designs a punch-card system to help calculate the 1890 U.S. Census. The machine saves the government several years of calculations and the U.S. taxpayer approximately $5 million, according to Columbia University. Hollerith later establishes a company that will eventually become International Business Machines Corporation (IBM).

EARLY 20TH CENTURY

1931: At the Massachusetts Institute of Technology (MIT), Vannevar Bush invents and builds the Differential Analyzer, the first large-scale automatic general-purpose mechanical analog computer, according to Stanford University.

1936: Alan Turing, a British scientist and mathematician, presents the principle of a universal machine, later called the Turing machine, in a paper called “On Computable Numbers…” according to Chris Bernhardt’s book “Turing’s Vision” (The MIT Press, 2017). Turing machines are capable of computing anything that is computable. The central concept of the modern computer is based on his ideas. Turing is later involved in the development of the Turing-Welchman Bombe, an electro-mechanical device designed to decipher Nazi codes during World War II, according to the UK’s National Museum of Computing.

1937: John Vincent Atanasoff, a professor of physics and mathematics at Iowa State University, submits a grant proposal to build the first electric-only computer, without using gears, cams, belts or shafts.

1939: David Packard and Bill Hewlett found the Hewlett Packard Company in Palo Alto, California. The pair decide the name of their new company by the toss of a coin, and Hewlett-Packard’s first headquarters are in Packard’s garage, according to MIT.

1941: German inventor and engineer Konrad Zuse completes his Z3 machine, the world’s earliest digital computer, according to Gerard O’Regan’s book “A Brief History of Computing” (Springer, 2021). The machine was destroyed during a bombing raid on Berlin during World War II. Zuse fled the German capital after the defeat of Nazi Germany and later released the world’s first commercial digital computer, the Z4, in 1950, according to O’Regan.

1941: Atanasoff and his graduate student, Clifford Berry, design the first digital electronic computer in the U.S., called the Atanasoff-Berry Computer (ABC). This marks the first time a computer is able to store information in its main memory; it is capable of performing one operation every 15 seconds, according to the book “Birthing the Computer” (Cambridge Scholars Publishing, 2016).

1945: Two professors at the University of Pennsylvania, John Mauchly and J. Presper Eckert, design and build the Electronic Numerical Integrator and Computer (ENIAC). The machine is the first “automatic, general-purpose, electronic, decimal, digital computer,” according to Edwin D. Reilly’s book “Milestones in Computer Science and Information Technology” (Greenwood Press, 2003).

1946: Mauchly and Eckert leave the University of Pennsylvania and receive funding from the Census Bureau to build the UNIVAC, the first commercial computer for business and government applications.

1947: William Shockley, John Bardeen and Walter Brattain of Bell Laboratories invent the transistor. They discover how to make an electric switch with solid materials and without the need for a vacuum.

1949: A team at the University of Cambridge develops the Electronic Delay Storage Automatic Calculator (EDSAC), “the first practical stored-program computer,” according to O’Regan. “EDSAC ran its first program in May 1949 when it calculated a table of squares and a list of prime numbers,” O’Regan wrote. In November 1949, scientists with the Council of Scientific and Industrial Research (CSIR), now called CSIRO, build Australia’s first digital computer called the Council for Scientific and Industrial Research Automatic Computer (CSIRAC). CSIRAC is the first digital computer in the world to play music, according to O’Regan.

Continuation of the history

LATE 20TH CENTURY

1953: Grace Hopper develops the first computer language, which eventually becomes known as COBOL, an acronym for COmmon Business-Oriented Language, according to the National Museum of American History. Hopper is later dubbed the “First Lady of Software” in her posthumous Presidential Medal of Freedom citation. Thomas Johnson Watson Jr., son of IBM CEO Thomas Johnson Watson Sr., conceives the IBM 701 EDPM to help the United Nations keep tabs on Korea during the war.

1954: John Backus and his team of programmers at IBM publish a paper describing their newly created FORTRAN programming language, an acronym for FORmula TRANslation, according to MIT.

1958: Jack Kilby and Robert Noyce unveil the integrated circuit, known as the computer chip. Kilby is later awarded the Nobel Prize in Physics for his work.

1968: Douglas Engelbart reveals a prototype of the modern computer at the Fall Joint Computer Conference, San Francisco. His presentation, called “A Research Center for Augmenting Human Intellect,” includes a live demonstration of his computer, including a mouse and a graphical user interface (GUI), according to the Doug Engelbart Institute. This marks the development of the computer from a specialized machine for academics into a technology that is more accessible to the general public.

1969: Ken Thompson, Dennis Ritchie and a group of other developers at Bell Labs produce UNIX, an operating system that made “large-scale networking of diverse computing systems — and the internet — practical,” according to Bell Labs. The team behind UNIX continued to develop the operating system using the C programming language, which they also optimized.

1970: The newly formed Intel unveils the Intel 1103, the first Dynamic Random-Access Memory (DRAM) chip.

1971: A team of IBM engineers led by Alan Shugart invents the “floppy disk,” enabling data to be shared among different computers.

1972: Ralph Baer, a German-American engineer, releases the Magnavox Odyssey, the world’s first home game console, in September 1972, according to the Computer Museum of America. Months later, entrepreneur Nolan Bushnell and engineer Al Alcorn of Atari release Pong, the world’s first commercially successful video game.

1973: Robert Metcalfe, a member of the research staff at Xerox, develops Ethernet for connecting multiple computers and other hardware.

1975: The magazine cover of the January issue of “Popular Electronics” highlights the Altair 8800 as the “world’s first minicomputer kit to rival commercial models.” After seeing the magazine issue, two “computer geeks,” Paul Allen and Bill Gates, offer to write software for the Altair, using the new BASIC language. On April 4, after the success of this first endeavor, the two childhood friends form their own software company, Microsoft.

1976: Steve Jobs and Steve Wozniak co-found Apple Computer on April Fool’s Day. They unveil the Apple I, the first computer with a single-circuit board and ROM (Read Only Memory), according to MIT.

1977: The Commodore Personal Electronic Transactor (PET) is released onto the home computer market, featuring an MOS Technology 8-bit 6502 microprocessor, which controls the screen, keyboard and cassette player. The PET is especially successful in the education market, according to O’Regan.

1977: Radio Shack begins its initial production run of 3,000 TRS-80 Model 1 computers — disparagingly known as the “Trash 80” — priced at $599, according to the National Museum of American History. Within a year, the company takes 250,000 orders for the computer, according to the book “How TRS-80 Enthusiasts Helped Spark the PC Revolution” (The Seeker Books, 2007).

1977: The first West Coast Computer Faire is held in San Francisco. Jobs and Wozniak present the Apple II computer at the Faire, which includes color graphics and features an audio cassette drive for storage.

1978: VisiCalc, the first computerized spreadsheet program, is introduced.

1979: MicroPro International, founded by software engineer Seymour Rubenstein, releases WordStar, the world’s first commercially successful word processor. WordStar is programmed by Rob Barnaby, and includes 137,000 lines of code, according to Matthew G. Kirschenbaum’s book “Track Changes: A Literary History of Word Processing” (Harvard University Press, 2016).

1981: “Acorn,” IBM’s first personal computer, is released onto the market at a price point of $1,565, according to IBM. Acorn uses the MS-DOS operating system from Microsoft. Optional features include a display, printer, two diskette drives, extra memory, a game adapter and more.

1983: The Apple Lisa, standing for “Local Integrated Software Architecture” but also the name of Steve Jobs’ daughter, according to the National Museum of American History (NMAH), is the first personal computer to feature a GUI. The machine also includes a drop-down menu and icons. Also this year, the Gavilan SC is released and is the first portable computer with a flip-form design and the very first to be sold as a “laptop.”

1984: The Apple Macintosh is announced to the world during a Super Bowl advertisement. The Macintosh is launched with a retail price of $2,500, according to the NMAH.

1985: As a response to the Apple Lisa’s GUI, Microsoft releases Windows in November 1985, the Guardian reported. Meanwhile, Commodore announces the Amiga 1000.

1989: Tim Berners-Lee, a British researcher at the European Organization for Nuclear Research (CERN), submits his proposal for what would become the World Wide Web. His paper details his ideas for Hyper Text Markup Language (HTML), the building blocks of the Web.

1993: The Pentium microprocessor advances the use of graphics and music on PCs.

1996: Sergey Brin and Larry Page develop the Google search engine at Stanford University.

1997: Microsoft invests $150 million in Apple, which at the time is struggling financially. This investment ends an ongoing court case in which Apple accused Microsoft of copying its operating system.

1999: Wi-Fi, the abbreviated term for “wireless fidelity,” is developed, initially covering a distance of up to 300 feet (91 meters), Wired reported.

Continuation of the continuation of the history

21ST CENTURY

2001: Mac OS X, later renamed OS X then simply macOS, is released by Apple as the successor to its standard Mac Operating System. OS X goes through 16 different versions, each with “10” as its title, and the first nine iterations are nicknamed after big cats, with the first being codenamed “Cheetah,” TechRadar reported.

2003: AMD’s Athlon 64, the first 64-bit processor for personal computers, is released to customers.

2004: The Mozilla Corporation launches Mozilla Firefox 1.0. The Web browser is one of the first major challenges to Internet Explorer, owned by Microsoft. During its first five years, Firefox exceeded a billion downloads by users, according to the Web Design Museum.

2005: Google buys Android, a Linux-based mobile phone operating system.

2006: The MacBook Pro from Apple hits the shelves. The Pro is the company’s first Intel-based, dual-core mobile computer.

2009: Microsoft launches Windows 7 on July 22. The new operating system features the ability to pin applications to the taskbar, scatter windows away by shaking another window, easy-to-access jump lists, easier previews of tiles and more, TechRadar reported.

2010: The iPad, Apple’s flagship handheld tablet, is unveiled.

2011: Google releases the Chromebook, which runs Google’s Chrome OS.

2015: Apple releases the Apple Watch. Microsoft releases Windows 10.

2016: The first reprogrammable quantum computer is created. “Until now, there hasn’t been any quantum-computing platform that had the capability to program new algorithms into their system. They’re usually each tailored to attack a particular algorithm,” said study lead author Shantanu Debnath, a quantum physicist and optical engineer at the University of Maryland, College Park.

2017: The Defense Advanced Research Projects Agency (DARPA) is developing a new “Molecular Informatics” program that uses molecules as computers. “Chemistry offers a rich set of properties that we may be able to harness for rapid, scalable information storage and processing,” Anne Fischer, program manager in DARPA’s Defense Sciences Office, said in a statement. “Millions of molecules exist, and each molecule has a unique three-dimensional atomic structure as well as variables such as shape, size, or even color. This richness provides a vast design space for exploring novel and multi-value ways to encode and process data beyond the 0s and 1s of current logic-based, digital architectures.”

How do modern computers work?


When a person is newly introduced to a computer, curiosity develops: how does this machine actually work? How does it understand my words and produce results in the blink of an eye? Such questions arise when we have no knowledge of the computer’s inner workings. Here, we will answer all the questions of your curious mind and discuss how a computer system works.

What is a Computer

Initially, as a new user, one should be introduced to the machine known as the Computer. A computer is an electronic device that requires a power supply to work; the power supply is the lifeline of a computer, as water is the lifeline of the human body. A computer is used to process the information we provide. It takes information or data in at one end, stores it for processing, and finally, after completing the processing, outputs the result at the other end. The information it takes in is known as Computer Input, and the result it provides after processing is known as Computer Output. The place where it stores the information is known as Computer Memory or RAM (Random Access Memory). A computer system stores information in bits; a bit is the smallest storage unit of a computer.

Major Components of a Computer

A computer system works by combining input, storage space, processing, and output. These four are the major components of a Computer.

Let’s understand one by one:

  • Input: An input is the information that we provide to the Computer. We provide the information using the Computer’s input devices: keyboard, mouse, microphone, and many more. For example, when we type something using a keyboard, it is known as an Input provided to the Computer.
  • Storage Space: This is where our input gets stored; it is known as Computer Memory. A computer uses a hard drive for storing files and documents. It uses two types of memory: internal and external. Internal memory is known as RAM, which is volatile in nature; it stores data temporarily. When data is ready to be processed, it is loaded into RAM, and after processing, it is moved to storage. External memory, on the other hand, is used to store data permanently until you remove it or the drive fails.
  • Processing: The processing of the input is performed by the CPU, the Central Processing Unit of the Computer. It is also known as the brain of the computer, responsible for processing the data provided by the user.
  • Output: When we type something using a keyboard, the place where we see the typed input is the Computer Monitor or Computer Screen. The screen lets us see the input we provided to the Computer. In addition, there are other types of output devices, such as loudspeakers, projectors, printers, and many more.

These all play a vital role in the working of a computer system.
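
To make the four components concrete, here is a minimal Python sketch (a hypothetical illustration of the flow described above, not anything from the sources) of input going into storage, being processed, and coming out as output:

```python
def main():
    # Input: data arrives from an input device (here, the keyboard).
    text = input("Type something: ")

    # Storage: the input is held in memory (a variable stands in for RAM).
    stored = text

    # Processing: the CPU transforms the data (here, uppercasing it).
    result = stored.upper()

    # Output: the result is sent to an output device (the screen).
    print(result)

if __name__ == "__main__":
    main()
```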

Hardware and Software

The input and output devices that can be physically touched are known as the Hardware of the system: keyboard, mouse, screen, and so on. The applications that reside in the Computer, which we can see but cannot touch, are known as Software: Microsoft Word, Excel, Paint, and all the other software installed on the system.

How does it all make a computer work

These major components of a computer system together enable a computer to work. The list below follows the order in which things happen, from the power button to a running operating system:

  • When we switch on the computer system by pressing the power button, a signal reaches the power supply, which converts alternating current (AC) into direct current (DC). Proper, ample power is then supplied to each component of the Computer.
  • With no issues, all components come into their active state, and the power supply sends a signal to the motherboard and CPU via transistors. During this time, the processor clears leftover data from its memory, and the CPU becomes ready to take in instructions (input) and process them.
  • A POST (power-on self-test) is performed on the Computer in a sequence to ensure that the major computer components exist and work properly. When the Computer passes the test, the 64-byte CMOS memory wakes first, because it carries the system time and date information and information about the hardware installed on the system. This information starts loading, and POST checks and compares it with the system settings. If the comparison succeeds, POST loads the basic drivers (which allow the hardware devices to communicate with the CPU and the Computer to continue booting) and interrupt handlers for the installed Hardware, such as the keyboard, hard drive, mouse, and many more.
  • However, if the Computer fails the POST, an irregular POST is encountered. We recognize an irregular POST when we hear a beep coming from the system, notifying us that some problem has occurred.
  • After that, POST checks the display adapter, and with no issues found, it loads the display that we see on the computer monitor. Next, it checks whether a cold boot or a reboot (warm boot) is being performed by looking at the memory address 0000:0472. If the value is 1234h, it is a reboot, and the rest of the POST steps are skipped; if not, it is a cold boot, and the remaining POST steps continue.
  • Now the RAM installed on the computer system is checked.
  • The system’s booting process then begins, loading the Operating System (Windows, Linux, macOS, etc.) with all associated files. The bootstrap loader starts the booting of the system. In this way, Windows and its other essential services get loaded onto the system.
  • Once the operating system has been loaded onto the Computer, the installed Hardware of the system becomes active and able to communicate with the CPU. Hardware devices communicate via interrupt requests (IRQ). When a task is already executing, the interrupt controller asks the CPU to hold a new hardware request until the current task completes; the held request is stored as a memory address in the memory stack. When the current task finishes, the task on hold is resumed and processed.

 

A thorough look into the parts

How Does a CPU Work?

The CPU of a computer, or central processing unit, is frequently compared to the human brain since it’s the central control of the computer. The CPU performs computer operations by rapidly executing program instructions. The speed of the CPU plays a large part in determining the power of a computer. Each new generation of microprocessors features a more powerful CPU that can execute instructions more quickly than the previous generation.

How a Computer Processor Works

The working of the CPU is defined as a three-step process. First, an instruction is fetched from memory. Second, the instruction is decoded and the processor figures out what it’s being told to do. Third, the instruction is executed and an operation is performed. These three steps repeat in a cycle that begins again with the CPU fetching the next instruction. The steps are referred to as the instruction cycle of the CPU.

The CPU uses a program counter to keep track of which instruction to fetch next. The counter is the address of the memory location that holds the next instruction to be executed. It’s stored in a register, which is a dedicated memory location in the CPU itself. The program counter is incremented to point to the next instruction after each fetch in the instruction cycle.
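
As a rough illustration of the instruction cycle, the Python sketch below models a toy CPU with a program counter and an accumulator register. The instruction set and its string encoding are invented for illustration; real CPUs decode binary machine code:

```python
# A toy program in "memory": each entry is one instruction.
memory = [
    ("LOAD", 7),     # put the value 7 in the accumulator
    ("ADD", 5),      # add 5 to the accumulator
    ("PRINT", None), # output the accumulator
    ("HALT", None),  # stop the cycle
]

pc = 0           # program counter: address of the next instruction
accumulator = 0  # a register holding the value being worked on

while True:
    opcode, operand = memory[pc]  # fetch the instruction at the counter
    pc += 1                       # increment the counter after each fetch

    # Decode and execute: figure out what the instruction says, then do it.
    if opcode == "LOAD":
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "PRINT":
        print(accumulator)        # prints 12
    elif opcode == "HALT":
        break
```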

Operations Performed by a CPU

The CPU executes instructions that perform a set of basic operations. There are arithmetic operations like addition, subtraction, multiplication and division. Memory operations move data from one location to another. Logical operations test a condition and make a decision based on the result. Control operations affect other components of the computer. These basic types of operations, executed very quickly, allow a computer to perform a wide range of functions. The exact number of operations supported by a CPU depends on its architecture.

How the CPU Uses Memory

Computer memory refers to the area where data and programs are stored. Memory is not part of the CPU, but the CPU must interact closely with it. There are two types of computer memory: primary, or main, and secondary. The CPU relies heavily on main memory for storing program instructions and the data the instructions operate on. Main memory is temporary in nature and only holds instructions and data for a program while the program is executing. Secondary memory is the more permanent storage provided by hard drives and flash drives.

A component of the CPU known as the control unit is responsible for moving instructions and data from secondary storage into main memory prior to instruction execution. The control unit also moves the results of an instruction to secondary storage.

What Are the Functions of a CPU in a Computer?

People frequently describe a computer’s CPU in terms of the human brain. This is an apt analogy since the CPU (central processing unit) controls computer operation. It does this by executing instructions provided by computer programs on data that come from a variety of sources.

What Is the Function of a CPU?

The purpose of every computer is some form of data processing. The CPU supports data processing by performing the functions of fetch, decode and execute on programmed instructions. Taken together, these functions are frequently referred to as the instruction cycle. In addition to the instruction cycle functions, the CPU performs fetch and write functions on data.

CPU Instruction Cycle Functions

When a program runs on a computer, instructions are stored in computer memory until they’re executed. The CPU uses a program counter to fetch the next instruction from memory, where it’s stored as binary machine code (the human-readable form of this code is known as assembly). The CPU decodes the instruction to determine which operation it specifies. Once this is done, the CPU does what the instruction tells it to, either performing an operation, fetching or storing data or adjusting the program counter to jump to a different instruction.

The types of operations that typically can be performed by the CPU include simple math functions like addition, subtraction, multiplication and division. The CPU can also perform comparisons between data objects to determine if they’re equal. All the amazing things that computers can do are performed with these and a few other basic operations. After an instruction is executed, the next instruction is fetched and the cycle continues.

CPU Data Functions

While performing the execute function of the instruction cycle, the CPU may be asked to execute an instruction that requires data. For example, executing an arithmetic function requires the numbers that will be used for the calculation. To deliver the necessary data, there are instructions to fetch data from memory and write data that has been processed back to memory. The instructions used by the CPU and the data it operates on are stored in the same memory area. Unique addresses are used by the CPU to keep track of different memory locations.

Microprocessor CPUs

The microprocessor of a personal computer is a chip that contains all the circuitry necessary to control computer operations. It allows every function of the CPU to be executed by a single chip that is cheaper to manufacture and more reliable thanks to integrated circuits. Prior to the introduction of microprocessors, a computer’s CPU was built from multiple chips spread across one or more circuit boards. Today, many modern processors have several CPUs on the same chip, referred to as cores.

Continuation of the previous chapter

Parts of a Microprocessor

A microprocessor, or central processing unit (CPU), is an internal hardware component that performs the mathematical calculations required for computers to run programs and execute commands. Processors are usually made of silicon material that contains tiny electrical components embedded on the surface. Typical computer programs that must be processed by CPUs include Internet browsers, games and video editing software.

Arithmetic Logic Unit

Arithmetic logic units (ALUs) in microprocessors allow computers to add, subtract, multiply, divide and perform other logical operations at high speeds. Thanks to advanced ALUs, modern microprocessors and GPUs (graphics processing units) are able to perform very complicated operations on large floating-point numbers.

Cache Memory

Cache memory is an area on the CPU where copies of common instructions required to perform functions and run programs are stored temporarily. Since the processor has its own smaller, faster cache memory, it can process data more quickly than reading and writing to the main system memory. Types of microprocessor memory include ROM (read-only) and RAM (random-access).

Transistors

Basically, transistors are semiconductor devices used to switch electronic signals. In microprocessors, a higher number of transistors means a better-performing CPU. For example, Intel Pentium 4 processors have around 40 to 50 million transistors, while older Pentium 3 CPUs have 9.5 million. More transistors allow for pipelining and multiple instruction decoders, which lets several processes be completed during every clock cycle.

Control Signals

Control signals are electronic signals that control the processor components being used to perform an operation or execute an instruction. An element called a “sequencer” sends control signals to tell the specific unit what it needs to do next. For example, a read or write signal may be sent to the cache memory letting it know that the processor is getting ready to read or write data into processor memory.

Instruction Set and Registers

The group of instructions a processor can execute are called its “instruction set.” The instruction set determines things such as the type of programs a CPU can work with. Registers are small memory locations that also contain instructions. Unlike regular memory locations, registers are referred to by a name instead of a number. For example, the IP (instruction pointer) contains the location of the next instruction, and the “accumulator” is where the processor stores the next value it plans to work on.

Basic Components of Microprocessors

Microprocessors perform millions of commands and calculations per second.

Intel introduced the first microprocessor in 1971 and called it the 4004 chip. Today’s microprocessors, with dimensions smaller than a dime, offer more power and capabilities. The center of the computer, the central processing unit (CPU), consists of one or more microprocessors. Manufactured from a silicon chip that contains millions of transistors, microprocessors move data from one memory address to another. The CPUs make decisions and then move on to work on new instructions and calculations.

Arithmetic and Logic Unit

The “arithmetic and logic unit” (ALU) performs math computations, such as subtraction, addition, division and Boolean functions. Boolean functions are a type of logic used for circuit designs. The ALU also executes comparisons and logic testing. The processor transmits signals to the ALU, which interprets the instructions and performs the calculations.

Registers

Microprocessors have temporary data holding places called registers. These memory areas maintain data, such as computer instructions, storage addresses, characters and other data. Some computer instructions may require the use of certain registers as part of a command. Each register has a specific function, such as the instruction register, program counter, accumulator and memory address register. For example, the program counter holds the address of the next instruction taken from random access memory.

Control Unit

Control units (CUs) receive signals from the CPU, which instruct the control unit to move data around the microprocessor. The control unit also directs the arithmetic and logic unit. Control units consist of multiple components, such as decoder, clock and control logic circuits. Working together, these devices transmit signals to certain locations on the microprocessor.

For example, the decoder receives commands from an application. The decoder interprets the instructions and takes an action. It sends signals to the ALU or directs registers to perform specific tasks. The control logic unit transmits signals to different sections of the microprocessor and registers, which informs these components to execute actions. The clock sends signals that synchronize and ensure timely execution of commands and processes.

Buses

Microprocessors have a system of buses, which move data. Buses refer to classifications of wiring that have specific tasks and functions. The data bus transfers data between the central processing unit and random access memory (RAM) — the computer’s primary memory. The control bus sends information necessary to coordinate and control multiple tasks. The address bus transmits the address between the CPU and the RAM for data being processed.

Cache Memory

Some advanced microprocessors have memory caches, which retain the last data used by the CPU. Memory caches speed up the computing process, because the CPU does not have to go to the slower RAM to retrieve data. Many computers have level 1 or level 2 caches; some systems have level 3 caches. The cache level indicates the order in which the CPU checks for data, starting with level 1. Manufacturers often integrate level 2 and level 3 caches into the microprocessor, which enhances processing speed.
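
As a rough sketch of that lookup order, the Python below models the CPU checking level 1, then level 2, then level 3, and finally falling back to slower RAM. The dictionaries and their contents are stand-ins; no fill or eviction behavior is modeled:

```python
# Smaller, faster stores hold less data; RAM holds everything.
l1 = {"a": 1}
l2 = {"a": 1, "b": 2}
l3 = {"a": 1, "b": 2, "c": 3}
ram = {"a": 1, "b": 2, "c": 3, "d": 4}

def read(address):
    # Check each cache level in order; fall back to RAM on a miss.
    for level, store in (("L1", l1), ("L2", l2), ("L3", l3), ("RAM", ram)):
        if address in store:
            print(f"{address!r} found in {level}")
            return store[address]
    raise KeyError(address)

read("a")  # 'a' found in L1
read("d")  # 'd' found in RAM (a miss at every cache level)
```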

How does a graphics card work?

The graphics processing unit (GPU) is a vital piece of hardware. Without it, you wouldn’t be able to play games, watch movies, or even flick through a Powerpoint presentation. So, what is a graphics card, and how does it actually work?

What Is a Graphics Card?

So, when someone says “graphics card,” they’re referring to the GPU—the graphics processing unit. Like the motherboard in your computer, the graphics card is also a printed circuit board. It comes with a specific set of instructions to follow, and when it comes to standalone (known as discrete) GPUs, it’ll also come with fans, onboard RAM, its own memory controller, a BIOS, and other features.

While graphics cards can come in all different shapes and sizes, there are two main types:

  • Integrated: An integrated GPU is built directly into the same package as the CPU or an SoC. The vast majority of Intel CPUs come with integrated graphics, though it’s a bit hit and miss with AMD’s CPUs. Integrated graphics are useful for modest gaming, web browsing, email, and watching videos. They’re also less power-hungry than a discrete GPU.
  • Discrete: A discrete GPU is one separate from the CPU, plugged into an expansion slot on the motherboard. A discrete GPU will deliver more power than an integrated GPU and can be used for high-level gaming, video editing, 3D model rendering, and other computationally intensive tasks. Some modern GPUs require hundreds of watts to run.

A modern, discrete GPU will typically outperform an integrated GPU, but you do have to take CPU and GPU generations into consideration. If you’re comparing hardware produced in the same era, the discrete GPU will win out. It simply has more processing power and more cooling available to process complex tasks.

 

What Components Does a Graphics Card Have?

Specific hardware varies between graphics card models, but most modern, discrete GPUs have the following components:

  • GPU: The GPU is an actual hardware component, similar to a CPU
  • Memory: Also known as VRAM, the graphics card comes with dedicated memory to assist operations
  • Interface: Most GPUs use PCI Express, found at the bottom of the card
  • Outputs: You’ll find various video outputs, often comprising HDMI, DisplayPort, DVI, or VGA
  • Fans/Heat Sink: All GPUs come with fans and a heat sink to help dissipate heat build-up during usage
  • Power Connectors: Modern GPUs require a six- or eight-pin power connector, sometimes even requiring two or three
  • BIOS: The GPU BIOS holds initial setup and program information, retaining data on voltages, memory, and more when you power down your machine

 

How Does a Graphics Card Work?

A graphics card is primarily responsible for rendering images on a display, be that photos, videos, games, documents, your regular desktop environment, a file folder, or anything else. All of these things, from tasks that require tremendous computing power, like a video game, to something we deem “simple,” like opening a new text document, require the use of a graphics card.

Expanding on this a little: your graphics card maps the instructions issued by the other programs on your computer into a visual rendering on your screen. A modern graphics card is capable of processing a phenomenal number of instructions simultaneously, drawing and redrawing images tens or even hundreds of times every second to ensure that whatever you’re looking at, and whatever task you’re attempting to complete, remains smooth.

So, the CPU sends information regarding what needs to appear on screen to the graphics card. In turn, the graphics card takes those instructions and runs them through its own processing unit, rapidly updating its onboard memory (known as VRAM) as to which pixels on the screen need changing and how. This information then whizzes from your graphics card to your monitor (via a cable, of course), where the images, lines, textures, lighting, shading, and everything else changes.

If done well, and the graphics card and other computer components aren’t pushed to perform actions outside their capabilities, it looks like magic. The above description is very, very basic. There is a lot more going on under the surface, but that’s a rough overview of how a graphics card works.
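
As a very loose illustration of that flow, the Python sketch below stands in for the CPU issuing a draw instruction, the "GPU" updating pixels in its VRAM, and the "monitor" reading the finished frame. Every name and structure here is invented for illustration:

```python
WIDTH, HEIGHT = 8, 4
# VRAM as a grid of pixels (characters stand in for colors).
vram = [[" " for _ in range(WIDTH)] for _ in range(HEIGHT)]

def gpu_draw_rect(x, y, w, h, pixel):
    # The GPU works out which pixels need changing and updates VRAM.
    for row in range(y, y + h):
        for col in range(x, x + w):
            vram[row][col] = pixel

def monitor_refresh():
    # The display reads the finished frame out of VRAM over the cable.
    for row in vram:
        print("".join(row))

gpu_draw_rect(2, 1, 4, 2, "#")  # the CPU's request: "draw a rectangle"
monitor_refresh()
```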

How do SSDs work?

For the sake of this guide, we will only look into how SSDs work, rather than hard drives.

Understanding Computers and Memory

To understand how SSDs work and why they’re so useful, it is best to understand how computer memory works. A computer’s memory architecture is broken down into three aspects:

  • The cache
  • The memory
  • The data drive

Each of these aspects serves an important function that determines how they operate.

The cache is the innermost memory unit. When running, your computer uses the cache as a sort of playground for data calculations and procedures. The electrical pathways to the cache are the shortest, making data access almost instantaneous. However, the cache is very small, so its data is constantly being overwritten.

The memory is the middle ground. You may know it as RAM (Random Access Memory). This is where your computer stores data related to the programs and processes that are actively running. Access to RAM is slower than access to the cache, but only negligibly so.

The data drive is where everything else is stored for permanence. It’s where all of your programs, configuration files, documents, music files, movie files, and everything else is kept. When you want to access a file or run a program, the computer needs to load it from the data drive and into RAM.

The important thing to know is that there’s a vast speed difference between the three. While cache and RAM operate at speeds in nanoseconds, a traditional hard disk drive operates at speeds in milliseconds. In essence, the data drive is the bottleneck: no matter how fast everything else is, a computer can only load and save data as fast as the data drive can handle it.
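
A quick back-of-the-envelope calculation shows the scale of that bottleneck. The latency figures in this Python sketch are illustrative round numbers, not measurements of any particular hardware:

```python
ram_latency_s = 100e-9  # ~100 nanoseconds for a RAM access (illustrative)
hdd_latency_s = 10e-3   # ~10 milliseconds for an HDD seek (illustrative)

ratio = hdd_latency_s / ram_latency_s
print(f"One HDD access costs roughly {ratio:,.0f} RAM accesses")
# One HDD access costs roughly 100,000 RAM accesses
```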

This is where SSDs step in. While traditional HDDs are orders of magnitude slower than cache and RAM, SSDs are much faster, significantly reducing the amount of time it takes to load various programs and processes. Simply put, an SSD will make your computer feel much faster.

How Do Solid-State Drives Work?

SSDs serve the same purpose as HDDs: they store data and files for long-term use. The difference is that SSDs use a type of memory called “flash memory,” which is similar to RAM. But, unlike RAM, which clears its data whenever the computer powers down, the data on an SSD persists even when it loses power.

If you took apart a typical HDD, you’d see a stack of magnetic platters with a read/write head—kind of like a vinyl record player. Before the head can read or write data, the platters have to spin around to the right location.

SSDs, by contrast, use a grid of electrical cells to send and receive data quickly. These grids are separated into sections called “pages,” and these pages are where data is stored. Pages are clumped together to form “blocks.” SSDs are called “solid-state” because they have no moving parts.

Why is this necessary to know? Because SSDs can only write to empty pages in a block. In HDDs, data can be written to any location on the platter at any time, which means that data can be easily overwritten. SSDs can’t directly overwrite data in individual pages; they can only write data to empty pages in a block.

So then, how do SSDs handle data deletion? When enough pages in a block are marked as unused, the SSD commits the entire block’s worth of data to memory, erases the entire block, then re-commits the data from memory back to the block while leaving the unused pages blank. Note that erasing a block doesn’t necessarily mean the data is fully gone, but you can still securely delete data on an SSD.

However, the consequence of how SSDs operate means that your SSD will become slower over time.

When you have a fresh SSD, it’s loaded entirely with blocks full of blank pages. When you write new data to the SSD, it can immediately write to those blank pages with blazing speeds. However, as more and more data gets written, the blank pages run out, and you’re left with random unused pages scattered throughout the blocks.

Since an SSD can’t directly overwrite an individual page, every time you want to write new data from that point on, the SSD needs to:

  • Find a block with enough pages marked “unused”
  • Record which pages in that block are still necessary
  • Reset every page in that block to blank
  • Rewrite the necessary pages into the freshly reset block
  • Fill the remaining pages with the new data

So, in essence, once you’ve gone through all of the blank pages from a new SSD purchase, your drive will have to go through this process whenever it wants to write new data. This is how most flash memory works.
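
Here is a simplified Python model of that write procedure, following the five steps above. Page counts, names, and data are invented for illustration; real SSD controllers are far more sophisticated:

```python
PAGES_PER_BLOCK = 4
BLANK = None

block = ["old-a", "old-b", "stale", "stale"]  # a block with 2 stale pages
unused = {2, 3}                               # pages marked "unused"

def write_to_block(block, unused, new_data):
    # Steps 1-2: record which pages in the block are still necessary.
    live = [(i, page) for i, page in enumerate(block) if i not in unused]
    # Step 3: reset every page in the block to blank (whole-block erase).
    block = [BLANK] * PAGES_PER_BLOCK
    # Step 4: rewrite the necessary pages into the freshly reset block.
    for i, page in live:
        block[i] = page
    # Step 5: fill the remaining blank pages with the new data.
    blanks = [i for i, page in enumerate(block) if page is BLANK]
    for i, data in zip(blanks, new_data):
        block[i] = data
    return block

print(write_to_block(block, unused, ["new-c", "new-d"]))
# ['old-a', 'old-b', 'new-c', 'new-d']
```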

That said, it’s still much faster than a traditional HDD, and the speed gains are absolutely worth the purchase of an SSD over an HDD.

The Downside to Solid-State Drives

Now that we know how a solid-state drive works, we can also understand one of its biggest downsides: flash memory can only sustain a finite number of writes before it dies.

There is a lot of science explaining why this happens but suffice it to say that as an SSD is used, the electrical charges within each of its data cells must be periodically reset. Unfortunately, the electrical resistance of each cell increases slightly with every reset, which increases the voltage necessary to write into that cell. Eventually, the required voltage becomes so high that the particular cell becomes impossible to write to.

Thus, SSD data cells have a finite number of writes. However, that doesn’t mean an SSD won’t last a long time!

How does RAM work?

Random access memory (RAM) is the best-known form of computer memory. This is what allows your computer to surf the internet and then quickly switch to loading an application or editing a document. RAM is considered “random access” because you can access any memory cell directly if you know the row and column that intersect at that cell.

In contrast, serial access memory (SAM) stores data as a series of memory cells that can only be accessed sequentially (like a cassette tape). If the data is not in the current location, each memory cell is checked until the needed data is found. SAM works very well for memory buffers, where the data is normally stored in the order in which it will be used (for instance, the texture buffer memory on a video card). RAM data, on the other hand, can be accessed in any order.
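
The difference in access pattern can be sketched in a few lines of Python. The list of cells and both functions are purely illustrative:

```python
cells = ["a", "b", "c", "d", "e"]

def serial_read(cells, wanted):
    # SAM-style: check each memory cell in sequence until the data is found.
    for checked, value in enumerate(cells, start=1):
        if value == wanted:
            return value, checked   # value, plus how many cells were touched
    raise LookupError(wanted)

def random_read(cells, address):
    # RAM-style: jump directly to the cell, regardless of its position.
    return cells[address], 1

print(serial_read(cells, "e"))  # ('e', 5) -- five cells checked
print(random_read(cells, 4))    # ('e', 1) -- one direct access
```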

RAM is basically your computer’s short-term memory. Similar to a microprocessor, a memory chip is an integrated circuit (IC) made of millions of transistors and capacitors. In the most common form of computer memory, dynamic random access memory (DRAM), a transistor and a capacitor are paired to create a memory cell, which represents a single bit of data. The capacitor holds the bit of information — a 0 or a 1 (see How Bits and Bytes Work for information on bits). The transistor acts as a switch that lets the control circuitry on the memory chip read the capacitor or change its state.

A capacitor is like a small bucket that can store electrons. To store a 1 in the memory cell, the bucket is filled with electrons. To store a 0, it is emptied. The problem with the capacitor’s bucket is that it has a leak. In a matter of a few milliseconds a full bucket becomes empty. Therefore, for dynamic memory to work, either the CPU or the memory controller has to come along and recharge all of the capacitors holding a 1 before they discharge. To do this, the memory controller reads the memory and then writes it right back. This refresh operation happens automatically thousands of times per second.

The capacitor in a dynamic RAM memory cell is like a leaky bucket. It needs to be refreshed periodically or it will discharge to 0. This refresh operation is where dynamic RAM gets its name. Dynamic RAM has to be dynamically refreshed all of the time or it forgets what it is holding. The downside of all this refreshing is that it takes time and slows down the memory.

Memory Cells and DRAM

Memory is made up of bits arranged in a two-dimensional grid.

Memory cells are etched onto a silicon wafer in an array of columns (bitlines) and rows (wordlines). The intersection of a bitline and wordline constitutes the address of the memory cell.

DRAM works by sending a charge through the appropriate column (CAS) to activate the transistor at each bit in the column. When writing, the row lines contain the state the capacitor should take on. When reading, the sense-amplifier determines the level of charge in the capacitor. If it is more than 50 percent, it reads it as a 1; otherwise it reads it as a 0. The counter tracks the refresh sequence based on which rows have been accessed in what order. The length of time necessary to do all this is so short that it is expressed in nanoseconds (billionths of a second). A memory chip rating of 70ns means that it takes 70 nanoseconds to completely read and recharge each cell.
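
Here is a behavioral sketch of that read-and-recharge cycle in Python. The charge values are chosen purely for illustration:

```python
def sense(charge):
    # Sense amplifier: more than 50 percent charge reads as a 1.
    return 1 if charge > 0.5 else 0

def read_and_refresh(cell):
    bit = sense(cell["charge"])
    # The controller reads the memory and writes it right back,
    # recharging the capacitor to full (for a 1) or empty (for a 0).
    cell["charge"] = 1.0 if bit == 1 else 0.0
    return bit

cell = {"charge": 0.8}         # a leaking capacitor that still holds a 1
print(read_and_refresh(cell))  # 1
print(cell)                    # {'charge': 1.0} -- refreshed
```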

Memory cells alone would be worthless without some way to get information in and out of them. So, the memory cells have a whole support infrastructure of other specialized circuits. These circuits perform functions such as:

  • Identifying each row and column (row address select and column address select)
  • Keeping track of the refresh sequence (counter)
  • Reading and restoring the signal from a cell (sense amplifier)
  • Telling a cell whether it should take a charge or not (write enable)

Other functions of the memory controller include a series of tasks such as identifying the type, speed and amount of memory and checking for errors.

Static RAM works differently from DRAM. We’ll look at how in the next section.

Static RAM

Static RAM uses a completely different technology. In static RAM, a form of flip-flop holds each bit of memory (see How Boolean Logic Works for details on flip-flops). A flip-flop for a memory cell takes four or six transistors along with some wiring, but never has to be refreshed. This makes static RAM significantly faster than dynamic RAM. However, because it has more parts, a static memory cell takes up a lot more space on a chip than a dynamic memory cell. Therefore, you get less memory per chip, and that increases its price.
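
As a behavioral sketch of why static RAM needs no refreshing, the Python below models a simple SR (set-reset) latch, one form of flip-flop: with neither input asserted, the stored bit feeds back on itself and persists. This is an abstraction of the transistor circuit, not a gate-level model:

```python
def sr_latch(q, set_bit, reset_bit):
    # Set forces the stored bit to 1, reset forces it to 0; with neither
    # asserted, the latch simply keeps holding its current value.
    if set_bit and not reset_bit:
        return 1
    if reset_bit and not set_bit:
        return 0
    return q  # hold: the bit persists with no refresh needed

q = 0
q = sr_latch(q, set_bit=1, reset_bit=0)  # store a 1
q = sr_latch(q, set_bit=0, reset_bit=0)  # hold
print(q)  # 1 -- still there, no refresh cycle required
```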

Static RAM is fast and expensive, and dynamic RAM is less expensive and slower. So static RAM is used to create the CPU’s speed-sensitive cache, while dynamic RAM forms the larger system RAM space.

Memory chips in desktop computers originally used a pin configuration called dual inline package (DIP). This pin configuration could be soldered into holes on the computer’s motherboard or plugged into a socket that was soldered on the motherboard. This method worked fine when computers typically operated on a couple of megabytes or less of RAM, but as the need for memory grew, the number of chips needing space on the motherboard increased.

The solution was to place the memory chips, along with all of the support components, on a separate printed circuit board (PCB) that could then be plugged into a special connector (memory bank) on the motherboard. Most of these chips use a small outline J-lead (SOJ) pin configuration, but quite a few manufacturers use the thin small outline package (TSOP) configuration as well. The key difference between these newer pin types and the original DIP configuration is that SOJ and TSOP chips are surface-mounted to the PCB. In other words, the pins are soldered directly to the surface of the board, not inserted in holes or sockets.

Memory chips are normally only available as part of a card called a module. When you shop for memory, on many of the modules you can see the individual memory chips.

A continuation of the previous chapter

Types of RAM

 


The following are some common types of RAM:

  • SRAM: Static random access memory uses multiple transistors, typically four to six, for each memory cell but doesn’t have a capacitor in each cell. It is used primarily for cache.
  • DRAM: Dynamic random access memory has memory cells with a paired transistor and capacitor requiring constant refreshing.
  • FPM DRAM: Fast page mode dynamic random access memory was the original form of DRAM. It waits through the entire process of locating a bit of data by column and row and then reading the bit before it starts on the next bit. Maximum transfer rate to L2 cache is approximately 176 MBps.
  • EDO DRAM: Extended data-out dynamic random access memory does not wait for all of the processing of the first bit before continuing to the next one. As soon as the address of the first bit is located, EDO DRAM begins looking for the next bit. It is about 5-20 percent faster than FPM DRAM. Maximum transfer rate to L2 cache is approximately 264 MBps.
  • SDRAM: Synchronous dynamic random access memory takes advantage of the burst mode concept to greatly improve performance. It does this by staying on the row containing the requested bit and moving rapidly through the columns, reading each bit as it goes. The idea is that most of the time the data needed by the CPU will be in sequence. SDRAM is about 5 percent faster than EDO RAM and has a transfer rate of 0.8-1.3 gigabytes per second (GB/s). It was developed in 1988.
  • DDR SDRAM: This is the next generation of SDRAM. Double data rate synchronous dynamic RAM is just like SDRAM except that it has higher bandwidth, meaning greater speed. Its transfer rate is 2.1-3.2 GB/s. DDR was released in 2000 and has advanced through three subsequent generations. DDR2 (2003) has a transfer rate of 4.2-6.4 GB/s, and DDR3 (2007) transfers data at 8.5-14.9 GB/s. The most recent generation in widespread use is DDR4, launched in 2014; its transfer rate is 17-21.3 GB/s. These standards are set by the Joint Electron Device Engineering Council (JEDEC), an organization made up of electronics companies. JEDEC released its specification for DDR5 in July 2020. RAM manufacturer Micron believes the new standard will increase performance by 87 percent when compared with a DDR4 module.
  • RDRAM: Rambus dynamic random access memory is a radical departure from the previous DRAM architecture. Designed by Rambus, RDRAM uses a Rambus in-line memory module (RIMM), which is similar in size and pin configuration to a standard DIMM. What makes RDRAM so different is its use of a special high-speed data bus called the Rambus channel. RDRAM memory chips work in parallel to achieve a data rate of 800 MHz, or 1,600 MBps and higher. Since they operate at such high speeds, they generate much more heat than other types of chips. To help dissipate the excess heat, Rambus chips are fitted with a heat spreader, which looks like a long thin wafer. Just as there are smaller versions of DIMMs, there are also SO-RIMMs, designed for notebook computers.
  • Credit Card Memory: Credit card memory is a proprietary self-contained DRAM memory module that plugs into a special slot for use in notebook computers.
  • PCMCIA Memory Card: Another self-contained DRAM module for notebooks, cards of this type are not proprietary and should work with any notebook computer whose system bus matches the memory card’s configuration. They are rarely used nowadays.
  • CMOS RAM: CMOS RAM is a term for the small amount of memory used by your computer and some other devices to remember things like hard disk settings. This memory uses a small battery to provide it with the power it needs to maintain the memory contents.
  • VRAM: VideoRAM, also known as multiport dynamic random access memory (MPDRAM), is a type of RAM used specifically for video adapters or 3-D accelerators. The “multiport” part comes from the fact that VRAM normally has two independent access ports instead of one, allowing the CPU and graphics processor to access the RAM simultaneously. Located on the graphics card, VRAM comes in a variety of formats, many of which are proprietary. The amount of VRAM is a determining factor in the resolution and color depth of the display. VRAM is also used to hold graphics-specific information such as 3-D geometry data and texture maps. True multiport VRAM tends to be expensive, so many graphics cards use SGRAM (synchronous graphics RAM) instead. Performance is nearly the same, but SGRAM is cheaper.

Memory Modules

The kinds of board and connector used for RAM in desktop computers have evolved over the past few years. The first types were proprietary, meaning that different computer manufacturers developed memory boards that would only work with their specific systems.

Then came SIMM, which stands for single in-line memory module. This memory board used a 30-pin connector and was about 3.5 x 0.75 inches in size (about 9 x 2 cm). In most computers, you had to install SIMMs in pairs of equal capacity and speed, because the bus is wider than a single SIMM can supply.

For example, you would install two 8-megabyte (MB) SIMMs to get 16 megabytes total RAM. Each SIMM could send 8 bits of data at one time, while the system bus could handle 16 bits at a time. Later SIMM boards, slightly larger at 4.25 x 1 inch (about 11 x 2.5 cm), used a 72-pin connector for increased bandwidth and allowed for up to 256MB of RAM. SIMM was used from the early 1980s to early 2000s.

As processors grew in speed and bandwidth capability, the industry adopted a new standard in dual in-line memory module (DIMM). DIMMs range in capacity and can be installed singly instead of in pairs.

Some brands of laptop computers use RAM based on the small outline dual in-line memory module (SODIMM) configuration. SODIMM cards are small, about 2 x 1 inch (5 x 2.5 cm) and have 144 or 200 pins. Capacity ranges from 2 to 32GB per module. Some sub-notebook computers use even smaller DIMMs, known as MicroDIMMs. The industry has been moving to low-power DDR4 modules in thinner and lighter laptops, because they use less energy and are more compact. Unfortunately, they must be soldered into place, meaning the average user can’t replace the original RAM.

Most memory available today is highly reliable. Most systems simply have the memory controller check for errors at startup and rely on that. Memory chips with built-in error-checking typically use a method known as parity to check for errors. Parity chips have an extra bit for every 8 bits of data. The way parity works is simple. Let’s look at even parity first.

When the 8 bits in a byte receive data, the chip adds up the total number of 1s. If the total number of 1s is odd, the parity bit is set to 1. If the total is even, the parity bit is set to 0. When the data is read back out of the bits, the total is added up again and compared to the parity bit. If the total is odd and the parity bit is 1, then the data is assumed to be valid and is sent to the CPU. But if the total is odd and the parity bit is 0, the chip knows that there is an error somewhere in the 8 bits and dumps the data. Odd parity works the same way, but the parity bit is set to 1 when the total number of 1s in the byte is even.
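
That even-parity scheme is easy to sketch in Python. The byte values below are arbitrary examples:

```python
def parity_bit(byte):
    # Count the 1s in the 8 data bits; an odd count means the parity bit
    # is set to 1, an even count means 0 (even parity).
    ones = bin(byte & 0xFF).count("1")
    return 1 if ones % 2 == 1 else 0

def check(byte, stored_parity):
    # On read-back, recompute and compare; a mismatch means a bit flipped.
    return parity_bit(byte) == stored_parity

data = 0b1011_0010               # four 1s (even), so the parity bit is 0
p = parity_bit(data)
print(check(data, p))            # True  -- data assumed valid
corrupted = data ^ 0b0000_1000   # one bit flips in memory
print(check(corrupted, p))       # False -- error detected, data dumped
```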

The problem with parity is that it discovers errors but does nothing to correct them. If a byte of data does not match its parity bit, the data is discarded and the system tries again. Computers in critical positions need a higher level of fault tolerance. High-end servers often have a form of error-checking known as error-correction code (ECC). Like parity, ECC uses additional bits to monitor the data in each byte. The difference is that ECC uses several bits for error checking — how many depends on the width of the bus — instead of one.

ECC memory uses a special algorithm not only to detect single-bit errors, but actually correct them as well. ECC memory will also detect instances when more than one bit of data in a byte fails. Such failures are very rare, and they are not correctable, even with ECC.
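
To see how correction is even possible, here is a minimal Python sketch of the classic Hamming(7,4) code, a textbook example of the technique. Real ECC modules use wider codes (typically 8 check bits protecting 64 data bits), but the principle is the same: the recomputed check bits form a "syndrome" that points at the failed bit.

```python
def hamming_encode(d1, d2, d3, d4):
    """Pack 4 data bits into a 7-bit codeword with 3 check bits."""
    p1 = d1 ^ d2 ^ d4            # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # covers codeword positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4            # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming_correct(word):
    """Fix up to one flipped bit, then return the 4 data bits."""
    p1, p2, d1, p4, d2, d3, d4 = word
    s1 = p1 ^ d1 ^ d2 ^ d4       # recompute each check bit
    s2 = p2 ^ d1 ^ d3 ^ d4
    s4 = p4 ^ d2 ^ d3 ^ d4
    syndrome = s1 + 2 * s2 + 4 * s4   # 0 means no error, else bit position
    if syndrome:
        word[syndrome - 1] ^= 1       # flip the failed bit back
    return [word[2], word[4], word[5], word[6]]

codeword = hamming_encode(1, 0, 1, 1)
codeword[5] ^= 1                      # simulate a single-bit memory error
print(hamming_correct(codeword))      # -> [1, 0, 1, 1]: error corrected
```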

The majority of computers sold use nonparity memory chips. These chips do not provide any type of built-in error checking, but instead rely on the memory controller for error detection.

How Much RAM Do You Need?

It’s been said that you can never have enough money, and the same holds true for RAM, especially if you do a lot of graphics-intensive work or gaming. Next to the CPU itself, RAM is the most important factor in computer performance. If you don’t have enough, adding RAM can make more of a difference than getting a new CPU!

If your system responds slowly or accesses the hard drive constantly, then you need to add more RAM. If you are running Windows 10, Microsoft recommends 1GB as the minimum RAM requirement for the 32-bit version, and 2GB for 64-bit. If you’re upgrading to Windows 11, you’ll need at least 4GB. If you’re using a Mac with macOS 11 (Big Sur), you’ll also need 4GB.

Linux has a reputation for working happily on systems with low system requirements, including RAM. Xubuntu, one popular low-requirement Linux distribution, requires a mere 512MB RAM. Xubuntu uses the lightweight Xfce desktop environment, which also works with other Linux distributions. Of course, there are distributions of Linux that have higher system requirements.

No matter what operating system you use, remember the minimum requirements are estimated for normal usage — accessing the internet, word processing, standard home/office applications and light entertainment. If you do computer-aided design (CAD), 3-D modeling/animation or heavy data processing, or if you are a serious gamer, then you will need more RAM. You may also need more RAM if your computer acts as a server of some sort (webpages, database, application, FTP or network).
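
If you’re curious how much RAM your own machine has and how much is free, here is a short Python sketch; it assumes the third-party psutil package is installed (pip install psutil):

```python
import psutil

mem = psutil.virtual_memory()
gib = 1024 ** 3
print(f"Total RAM:     {mem.total / gib:.1f} GiB")
print(f"Available RAM: {mem.available / gib:.1f} GiB")

# When available RAM runs low, the OS pages to the much slower drive,
# which causes the constant disk access described above.
if mem.available < 2 * gib:
    print("Under 2 GiB free: consider closing programs or adding RAM.")
```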

Another question is how much VRAM you want on your video card. Even the entry-level cards you can buy today come with several gigabytes of VRAM, which is more than enough for a typical office environment. You should probably invest in a higher-end graphics card if you want to do any of the following:

  • Play realistic games
  • Capture and edit video
  • Create 3-D graphics
  • Work in a high-resolution, full-color environment
  • Design full-color illustrations

When shopping for video cards, remember that your monitor and computer must be capable of supporting the card you choose.

What is a motherboard? And why is it needed?

What Is a Motherboard?

A motherboard is a circuit board with various components that work in unison to make a computer function.

We’ve established that a motherboard is like the headquarters of a large corporation. But the headquarters alone is not what makes a company successful. Just as a company has different branches, a motherboard has different parts within it that work together and pass data to one another.

Form Factor

The form factor describes a motherboard’s physical specifications: its size, shape, and layout.

Take McDonald’s for instance. While all McDonald’s restaurants operate the same way, some are set up differently. Some have play centers, fancy self-ordering touch screens, and unbroken ice cream machines.

It’s the same with form factors. While all motherboards operate the same way, different models have different kinds of ports, dimensions, and mounting holes.

Popular form factors include:

  • ATX: The prom queen of form factors, the ATX is a popular choice and features large dimensions (most being 12 x 9.6 inches)
  • microATX: A smaller version of the standard ATX, with smaller dimensions and fewer expansion slots
  • Mini-ATX: Smaller than the micro version, these are designed for mobile CPUs
  • Mini-ITX: Smaller than an ATX board (6.7 x 6.7 inches), the mini-ITX form factors are quiet and don’t use a lot of power
  • Nano-ITX: In between a Pico and Mini-ITX, this works well with thin devices
  • Pico-ITX: Really tiny at 3.9 x 2.8 inches, typically supporting up to 1 GB of RAM
  • Other discontinued form factors include BTX, LPX, and NLX.

Chipset

The chipset allows data to flow between various components, namely the CPU, peripherals, ATA drives, graphics, and memory.

It can be divided into these two categories:

  • Northbridge: Located on the “north” side of a chipset, it “bridges” together the following components: CPU, RAM, and PCIe
  • Southbridge: Located on the “south” side of a chipset, it “bridges” together the following components: BIOS, USB, SATA, and PCI

Think of a chipset like the CEO of a large company, with the Northbridge and Southbridge acting as the CFO and COO.

In business these three C’s (or the C-Suite) work together within the headquarters of a company to delegate tasks to their subordinates. In the case of motherboards, the C-Suite is made up of the big bosses that make sure information flows between the subordinates (like the BIOS, CPU, RAM, etc.).

CPU Socket

This is basically a little habitat for the CPU to rest in. The CPU itself is a small square with a grid of pins and connectors underneath it, and it interprets and transmits the data routed to it by the northbridge part of the chipset.

Think of the CPU like the overachieving office assistant to a CFO/COO. The office assistant resides in its own cubicle (or in this case, the CPU socket) to execute various kinds of tasks.

It’s like a CFO/COO telling an office assistant to schedule meetings, make phone calls, and go on coffee runs. The office assistant, or CPU, carries out these kinds of tasks (but in a more mathematical kind of way, as the CPU reads input and output instructions).

Having a high-quality CPU (and office assistant for the matter) is important to the overall speed and efficiency of a computer.

Slots

Think of slots like different branches/departments of a company.

Most companies have departments for things like marketing, human resources, accounting, research, etc.

Slots are like these kinds of departments for a motherboard, with branches like:

  • Memory/DIMM Slots: Used for holding memory/RAM
  • PCI: Connects expansion cards like video, network, and sound cards
  • PCIe: A modern version of PCI but with a different interface that can work with almost any kind of expansion card
  • USB: Onboard slots for USB connectors like flash drives, although these are not very common
  • SATA: Used for optical/hard disk/solid-state drives

Data Bus

All of the components mentioned above would not work in unison without the necessary data buses that connect everything together.

Think of data buses as a form of communication.

So in a large company, if the CFO/COO wants to tell an office assistant what to do, how would they go about it? Email? Phone? An in-person conversation? It doesn’t matter as long as there is some form of communication going on.

It’s the same idea with a motherboard. All of the components transmit data to one another through data buses.

Putting Them Together: How It All Works

When you turn your computer on, power is sent from the power supply on to the motherboard.

Data is transferred via data buses and goes through the northbridge and southbridge part of the chipset.

The northbridge part bridges data to the CPU, RAM, and PCIe. The RAM sends inputs to the CPU, which “interprets” them and produces outputs. Data to the PCIe is then transferred to an expansion card, depending on which type you have.

The southbridge part bridges data to the BIOS, USB, SATA, and PCI. Signals to the BIOS allow your computer to boot up, while data to the SATA “awakens” your optical, hard disk, and solid-state drives. Data to the PCI, in turn, brings up your video, network, and sound cards.

In short, a motherboard serves as the headquarters of a computer which transmits data via data buses. These data buses go through the northbridge and southbridge parts of a chipset, which then venture off into other components like the CPU, RAM, PCI, PCIe, etc.

Everything works together like a successful corporation, albeit in a more binary sort of way.

How does the mouse and keyboard work?

We are going to assume that you have an optical mouse for this guide.

The mouse

The optical mouse actually uses a tiny camera to take 1,500 pictures every second. Able to work on almost any surface, the mouse has a small, red light-emitting diode (LED) that bounces light off that surface onto a complementary metal-oxide semiconductor (CMOS) sensor.

The CMOS sensor sends each image to a digital signal processor (DSP) for analysis. The DSP, operating at 18 MIPS (million instructions per second), is able to detect patterns in the images and see how those patterns have moved since the previous image. Based on the change in patterns over a sequence of images, the DSP determines how far the mouse has moved and sends the corresponding coordinates to the computer. The computer moves the cursor on the screen based on the coordinates received from the mouse. This happens hundreds of times each second, making the cursor appear to move very smoothly.
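
Here is a toy Python sketch of the DSP’s job: compare two tiny grayscale frames and find the (dx, dy) shift that lines them up best. The 8 x 8 frames, pixel values, and search range are invented for illustration; a real mouse does this in dedicated hardware over a thousand times per second.

```python
def estimate_motion(prev, curr, max_shift=2):
    h, w = len(prev), len(prev[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Sum of absolute pixel differences for this candidate shift
            err = sum(abs(prev[y][x] - curr[y + dy][x + dx])
                      for y in range(max_shift, h - max_shift)
                      for x in range(max_shift, w - max_shift))
            if err < best_err:
                best_err, best = err, (dx, dy)
    return best

prev = [[0] * 8 for _ in range(8)]
prev[3][3] = 255                      # one bright speck on the surface
curr = [[0] * 8 for _ in range(8)]
curr[3][2] = 255                      # same speck, one pixel to the left

print(estimate_motion(prev, curr))    # -> (-1, 0): the surface image moved
                                      # left, so the mouse moved right
```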

The keyboard

Internal Working of the Keyboards

The Key Matrix

The keyboard has its own processor and circuitry, much of which forms an important component called the key matrix. The key matrix is a grid of circuits underneath the keys, and each circuit is broken at the point beneath its key, leaving it incomplete. When you press a key, you complete its circuit, enabling the processor to determine the location of the key that was pressed.
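
Here is a hedged Python sketch of how a matrix scan might work; the 2 x 3 layout and key names are invented for illustration:

```python
# The processor drives one row at a time and senses each column;
# a completed circuit reveals exactly which key is pressed.

LAYOUT = [["q", "w", "e"],
          ["a", "s", "d"]]

# switch_closed[row][col] is True when that key completes its circuit
switch_closed = [[False, False, False],
                 [False, True,  False]]   # pretend "s" is held down

def scan_matrix():
    hits = []
    for row in range(len(LAYOUT)):            # energize each row in turn
        for col in range(len(LAYOUT[row])):   # then sense every column
            if switch_closed[row][col]:       # circuit completed here
                hits.append(LAYOUT[row][col])
    return hits

print(scan_matrix())                          # -> ['s']
```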

Working of the Keys

Beneath each key there is a little hole at the top of a long, round bar; you can see this if you pry a key off the keyboard. When you press a key, this bar pushes through the hole and makes contact with the circuit layers below. Inside the hole is a tiny piece of rubber that resists the key’s downward movement and pushes it back up when released. This rubber is what gives the keys their spring.

Detection of Keypresses

When you type or press any key, a switch closes, which completes the circuit and allows a tiny amount of current to flow. The keyboard’s processor notes the position of the key that was pressed and sends this information to the computer, where it is received by the keyboard controller. The controller processes the information and, in turn, passes it to the operating system (OS).

The OS first checks the data for system-level commands, like Ctrl+Shift+Esc, the keypress that brings up the Task Manager. If such a command is present, the computer executes it; if not, the OS forwards the information to the current application. The application then checks whether the keypress matches one of its own commands, like Ctrl+P, the keypress for the print command. If so, the command is executed; if not, the keypress is accepted as content or data. All of this happens in a fraction of a second, so even if you press many keys quickly, there is no lag.

Physically, what happens behind the scenes is this: there are three separate layers of plastic beneath the keys. Two of them are covered in electrically conducting metal tracks, and between them sits an insulating layer with holes in it. At the spots where a key presses the two conducting layers together, the tracks touch, allowing a tiny electric current to flow.
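
Here is a rough Python sketch of that routing logic; the two command tables and their entries are invented for illustration:

```python
# The OS checks for system-level commands first, the application checks
# its own shortcuts next, and anything left over becomes plain text input.

SYSTEM_COMMANDS = {("Ctrl", "Shift", "Esc"): "open Task Manager"}
APP_COMMANDS = {("Ctrl", "P"): "print the document"}

def route_keypress(keys):
    if keys in SYSTEM_COMMANDS:
        return "OS: " + SYSTEM_COMMANDS[keys]
    if keys in APP_COMMANDS:
        return "Application: " + APP_COMMANDS[keys]
    return "Text input: " + keys[-1]          # accepted as content/data

print(route_keypress(("Ctrl", "Shift", "Esc")))  # OS: open Task Manager
print(route_keypress(("Ctrl", "P")))             # Application: print the document
print(route_keypress(("x",)))                    # Text input: x
```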

Character Mapping

The key matrix has a corresponding chart, or character map, stored in the keyboard’s read-only memory (ROM). When you press a key, the processor looks up the position of the circuit that was closed, or completed, in the character map and determines which key was pressed. All the keys are mapped and stored in memory. For example, if just the location of the ‘x’ key is determined to be pressed, the lowercase letter ‘x’ is displayed or taken as a keypress; but if the locations of both the ‘Shift’ and ‘x’ keys are pressed, the uppercase letter ‘X’ results.
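
Here is a simplified Python sketch of a character-map lookup; the key positions in the table are invented for illustration:

```python
# The closed circuit's (row, column) position is looked up, and the
# Shift modifier selects a different character for the same position.

CHARACTER_MAP = {
    (2, 7): {"plain": "x", "shift": "X"},
    (2, 8): {"plain": "c", "shift": "C"},
    (1, 3): {"plain": "3", "shift": "#"},
}

def translate(position, shift_held):
    entry = CHARACTER_MAP[position]
    return entry["shift"] if shift_held else entry["plain"]

print(translate((2, 7), shift_held=False))   # -> x
print(translate((2, 7), shift_held=True))    # -> X
```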

To put it simply, keyboards use switches and circuits to change keystrokes to a format the computer understands. Every keyboard contains a processor that does the work of translating the keystrokes, or the keys pressed, to the computer.

Types of Switches

Keyboards use two main types of switches to complete their circuits: mechanical and capacitive. Capacitive keyboards work differently from the mechanical process described above. The circuit is never broken; current passes through it continuously. However, each individual key has a plate attached to it that moves closer to the circuit when pressed. The key matrix registers this movement as a change in the electric current flowing through the circuit. This change is then compared to the character map, and the location of the key pressed is determined.

Mechanical switches include rubber dome switches, membrane switches, metal contact switches, and foam element switches. Of these, rubber dome switches are the most common, as they have a good tactile response and are fairly resistant to spills and corrosion, in addition to being relatively inexpensive and easy to manufacture.

Though there are various types of keyboards, like wireless, Bluetooth, and USB keyboards, they all use the same principle of completing a circuit to determine a keypress, so as to perform a function.

Written by Gustavo Fring

I hope you enjoyed the guide we shared about METAL GEAR RISING: REVENGEANCE – Gameplay Basics for Beginners; if you think we forgot to add something or should add more information, please let us know by commenting below! See you soon!

