I'm sure we've all heard the terms 64bit and 32bit thrown around, but what do they actually mean?
I'm pretty sure they have to do with the size of a memory address. On a 64bit machine, a reference to an object is 64 bits. But I want to dig a little deeper....
One often hears the phrase '64bit machine.' What part of the computer is actually geared toward the number of bits? Processor? Operating System?
What is the advantage of having larger memory addresses?
I could add more questions, but I think brief is better.
Thanks guys :D
7 Answers
64 bit refers to the width of registers, memory addressing space, etc. One benefit is the ability to address more than 4GB of memory.
Wikipedia has an article on 64-bit computing with more details.
Edit: The advantages to more memory are that the operating system and programs have more virtual addressing space--16 exabytes (17.2 billion GB)--and, more importantly, that more physical memory can be added to a system and addressed, causing less swapping of virtual memory to and from disk.
The advantage of wider registers and data buses is that it is easier and faster to move the same amount of data around. An operation that previously required two or more registers can now be done with one.
So, performance is typically increased when software is recompiled for 64 bits.
A disadvantage is that wider data can mean more space taken by the same data. For instance, storing the number 300 requires nine bits. If it's stored in a 32-bit integer, 23 bits are wasted; in a 64-bit integer, the wastage grows to 55 bits. So, without retooling, a simple recompile to 64 bit can yield faster but slightly more bloated software.
Edit: Also there are 64-bit technology pages here:
- IBM: From the stacks: Making the transition to 64 bits
- IBM: Porting Linux applications to 64-bit systems
- IBM: 64-Bit Computing Decision-Maker’s Guide
- CodingHorror: Is it time for 64 bit on the desktop?
- Ars Technica (classic): An Introduction to 64-bit Computing and x86-64
The difference is exactly 32 bit ;-)
You need 64-bit hardware (a processor) to run a 64-bit OS. You need a 64-bit OS to run 64-bit software. These are the dependencies.
- In a 32-bit system you are limited to addressing 4 GiB (2^32 bytes) of memory; in a 64-bit system the theoretical limit is 2^64 bytes.
- 64-bit software needs slightly more memory, mainly because pointers are 8 bytes instead of 4.
- On x86_64, 64-bit executables need more memory, as many instructions require an additional prefix byte (the REX prefix), and thus may run slower.
- On x86_64, 64-bit software can use more registers and has the potential to run faster.
- 64bit systems can directly address significantly more memory
- 64bit systems can process data in chunks twice as large as 32bit, which helps some operations go more quickly
For some programs, like office automation suites, 32bit vs 64bit makes little observable difference.
But for other applications, such as databases, graphics/video processing, or hosting virtual machines, being able to reach more physical memory at once and being able to process more information with each instruction can make a huge difference in performance.
Note that today, many 32-bit chips have 64-bit extensions, as many FPU (math) and SIMD (vector) operations are already done in 64-bit mode.
See 32-bit Vs. 64-bit Systems: What's The Difference? for more.
CPU registers and memory addressing.
The system can reference (see) much more memory.
I think the best answer would be a comparison of x86 (32-bit) and x64 assembly.
When your 32-bit program pushes a variable, for example the integer 5, onto the stack, it executes `push 5`.
To understand things better, `push X` is shorthand for decrementing the stack pointer and storing X there: `sub esp, 4` followed by `mov [esp], X`.
Those registers are 32 bits (4 bytes) long, which is the size of a pointer in any 32-bit programming language.
In 64-bit code, however, the sizes double, and so do the registers. Our register ESP still exists in x64 executables, but it's not used as widely as it is in 32-bit ones.
Instead, all of the registers get an 'R' in front of their name (EAX becomes RAX, ESP becomes RSP, EDX becomes RDX, and so on).
So our code in an x64 executable won't look any different, except that the shortcut for `push X` becomes `sub rsp, 8` followed by `mov [rsp], X`.
RSP is double the size of ESP: 64 bits, 8 bytes.
The bottom line is that 64-bit executables use more memory than 32-bit executables.
Let's go back to the basics.
99% of computers these days are based on what is referred to as the Von Neumann architecture. Essentially, the computer is in a constant cycle of:
- Fetching a command from RAM
- Executing the command on the CPU
(Figure: the Von Neumann fetch-execute cycle; source: wikimedia.org)
When referring to 32/64-bit system (or any other bit size), essentially you are talking about the architecture and implementation of the computer:
- size of memory space (RAM)
- size of CPU registers
- bus size (i.e. between CPU, RAM, I/O, etc...)
If you have a 64-bit system, you have an address space of 2^64 bytes. This is why a 32-bit system cannot have more than 4GB of RAM: how can it address a memory space larger than 2^32 bytes?
Regarding the performance differences, there is no clear cut answer (just as there is no clear answer if CISC or RISC architecture is better). It vastly depends on the applications you are using.
To sum: a 64-bit architecture is simply a different way to build a computer. It does not mean it is better, or worse, or does things differently (on a low level, every computer is doing fetch-execute). It's simply a different way of implementing a computer.
Strictly speaking, "64-bit" or "32-bit" refers just to the width of the main bus.
8.1.2 Multiprocessor Operating System Types
Let us now turn from multiprocessor hardware to multiprocessor software, in particular, multiprocessor operating systems. Various organizations are possible. Below we will study three of them.
Each CPU Has Its Own Operating System
The simplest possible way to organize a multiprocessor operating system is to statically divide memory into as many partitions as there are CPUs and give each CPU its own private memory and its own private copy of the operating system. In effect, the n CPUs then operate as n independent computers. One obvious optimization is to allow all the CPUs to share the operating system code and make private copies of only the data, as shown in Fig. 8-6.
Figure 8-6 Partitioning multiprocessor memory among four CPUs, but sharing a single copy of the operating system code. The boxes marked Data are the operating system's private data for each CPU.
This scheme is still better than having n separate computers since it allows all the machines to share a set of disks and other I/O devices, and it also allows the memory to be shared flexibly. For example, if one day an unusually large program has to be run, one of the CPUs can be allocated an extra large portion of memory for the duration of that program. In addition, processes can efficiently communicate with one another by having, say, a producer be able to write data into memory and have a consumer fetch it from the place the producer wrote it.
Still, from an operating systems' perspective, having each CPU have its own operating system is as primitive as it gets.
It is worth explicitly mentioning four aspects of this design that may not be obvious. First, when a process makes a system call, the system call is caught and handled on its own CPU using the data structures in that operating system's tables.
Second, since each operating system has its own tables, it also has its own set of processes that it schedules by itself. There is no sharing of processes. If a user logs into CPU 1, all of his processes run on CPU 1. As a consequence, it can happen that CPU 1 is idle while CPU 2 is loaded with work.
Third, there is no sharing of pages. It can happen that CPU 1 has pages to spare while CPU 2 is paging continuously. There is no way for CPU 2 to borrow some pages from CPU 1 since the memory allocation is fixed.
Fourth, and worst, if the operating system maintains a buffer cache of recently used disk blocks, each operating system does this independently of the other ones. Thus it can happen that a certain disk block is present and dirty in multiple buffer caches at the same time, leading to inconsistent results. The only way to avoid this problem is to eliminate the buffer caches. Doing so is not hard, but it hurts performance considerably.
Master-Slave Multiprocessors
For these reasons, this model is rarely used any more, although it was used in the early days of multiprocessors, when the goal was to port existing operating systems to some new multiprocessor as fast as possible. A second model is shown in Fig. 8-7. Here, one copy of the operating system and its tables are present on CPU 1 and not on any of the others. All system calls are redirected to CPU 1 for processing there. CPU 1 may also run user processes if there is CPU time left over. This model is called master-slave since CPU 1 is the master and all the others are slaves.
Figure 8-7 A master-slave multiprocessor model.
The master-slave model solves most of the problems of the first model. There is a single data structure (e.g., one list or a set of prioritized lists) that keeps track of ready processes. When a CPU goes idle, it asks the operating system for a process to run and it is assigned one. Thus it can never happen that one CPU is idle while another is overloaded. Similarly, pages can be allocated among all the processes dynamically and there is only one buffer cache, so inconsistencies never occur.
The problem with this model is that with many CPUs, the master will become a bottleneck. After all, it must handle all system calls from all CPUs. If, say, 10% of all time is spent handling system calls, then 10 CPUs will pretty much saturate the master, and with 20 CPUs it will be completely overloaded. Thus this model is simple and workable for small multiprocessors, but for large ones it fails.
Symmetric Multiprocessors
Our third model, the SMP (Symmetric MultiProcessor), eliminates this asymmetry. There is one copy of the operating system in memory, but any CPU can run it. When a system call is made, the CPU on which the system call was made traps to the kernel and processes the system call. The SMP model is illustrated in Fig. 8-8.
Figure 8-8 The SMP multiprocessor model.
This model balances processes and memory dynamically, since there is only one set of operating system tables. It also eliminates the master CPU bottleneck, since there is no master, but it introduces its own problems. In particular, if two or more CPUs are running operating system code at the same time, disaster will result. Imagine two CPUs simultaneously picking the same process to run or claiming the same free memory page. The simplest way around these problems is to associate a mutex (i.e., lock) with the operating system, making the whole system one big critical region. When a CPU wants to run operating system code, it must first acquire the mutex. If the mutex is locked, it just waits. In this way, any CPU can run the operating system, but only one at a time.
This model works, but is almost as bad as the master-slave model. Again, suppose that 10% of all run time is spent inside the operating system. With 20 CPUs, there will be long queues of CPUs waiting to get in. Fortunately, it is easy to improve. Many parts of the operating system are independent of one another. For example, there is no problem with one CPU running the scheduler while another CPU is handling a file system call and a third one is processing a page fault.
This observation leads to splitting the operating system up into independent critical regions that do not interact with one another. Each critical region is protected by its own mutex, so only one CPU at a time can execute it. In this way, far more parallelism can be achieved. However, it may well happen that some tables, such as the process table, are used by multiple critical regions. For example, the process table is needed for scheduling, but also for the fork system call and also for signal handling. Each table that may be used by multiple critical regions needs its own mutex. In this way, each critical region can be executed by only one CPU at a time and each critical table can be accessed by only one CPU at a time.
Most modern multiprocessors use this arrangement. The hard part about writing the operating system for such a machine is not that the actual code is so different from a regular operating system. It is not. The hard part is splitting it into critical regions that can be executed concurrently by different CPUs without interfering with one another, not even in subtle, indirect ways. In addition, every table used by two or more critical regions must be separately protected by a mutex and all code using the table must use the mutex correctly.
Furthermore, great care must be taken to avoid deadlocks. If two critical regions both need table A and table B, and one of them claims A first and the other claims B first, sooner or later a deadlock will occur and nobody will know why. In theory, all the tables could be assigned integer values and all the critical regions could be required to acquire tables in increasing order. This strategy avoids deadlocks, but it requires the programmer to think very carefully which tables each critical region needs to make the requests in the right order.
As the code evolves over time, a critical region may need a new table it did not previously need. If the programmer is new and does not understand the full logic of the system, then the temptation will be to just grab the mutex on the table at the point it is needed and release it when it is no longer needed. However reasonable this may appear, it may lead to deadlocks, which the user will perceive as the system freezing. Getting it right is not easy and keeping it right over a period of years in the face of changing programmers is very difficult.