Wednesday, August 20, 2008

question bank

Personal Questions

Tell me about yourself.
What are your areas of interest?
What different subjects did you cover in your engineering course?
What are your final year electives?
How comfortable are you with C and C++?
How comfortable are you with pointers?
What kind of software process did you follow?
What is your understanding of the storage domain?
Tell me some problems you solved and how you did that?

Data structure Questions

1. Which data structures and algorithms are you familiar with?
2. Which sorting algorithms are you familiar with? What are the differences between them, and what are their complexities?
3. Which search algorithms are you familiar with? When do we use which algorithm?
4. What are the differences between linked lists and arrays? What are the different types of linked lists?
5. What is the difference between a stack and a queue?
6. How do you detect the beginning and end of a circular queue or linked list, i.e. a queue-full condition, a queue-empty condition, or that the queue has space for insertion?
7. How many iterations are required to search an n-element binary tree?
8. What do you know about loops in a linked list?
9. What is the difference between a mutex and a semaphore? Give examples.
10. What data structure is used to represent images?

C/C++ Questions

1. What are the Different Storage classes?
2. What do you understand by inheritance?
3. What do you understand by Polymorphism?
4. Give examples for Compile Time and Run time polymorphism?
5. What is operator overloading?
6. What is static in C and C++?
7. How is a static variable in C different from a global variable?
8. How do you describe the software development life cycle?
9. What is the difference between pointers and arrays?
10. What is the difference between pass by value and pass by reference? Are printf and scanf examples of pass by value or pass by reference?
11. What source control tool/source control system have you used?
12. How do you test your code?
13. How do you debug?
14. What are protected members in a class?
15. What are virtual Functions?
16. What is the difference between a union and a structure? Give an example of a union. Also justify why a union is sometimes used instead of an enum.
17. What do you know about deadlock hierarchy?
18. Explain the copy constructor. Why do we need it?
19. Explain the difference between static binding and dynamic binding.
If we had to run an application on a cell phone after compiling it, what would you need to put on the cell phone in the static case and in the dynamic case?
20. Explain abstract classes.
21. Are printf and scanf pass by value or pass by reference?
22. Is the variable name in printf passed by value or by reference?
23. Give an example of a union.
24. Could an enum be used instead of a union? Is that better? What are the fields in that union example?
25. What are virtual functions?
26. What would a pointer to a class look like?
27. Copy constructor – Why do we need that?
28. What is the difference between a dynamically linked library and a statically linked library?
29. What is an abstract class?
30. How does runtime binding occur?
----------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------
O.S.
What do you understand by a process and a thread?
A process, in the simplest terms, is an executing program.
A thread is the basic unit to which the operating system allocates processor time.
One or more threads run in the context of the process.
Each process provides the resources needed to execute a program. Each process is started with a single thread, often called the primary thread, but can create additional threads from any of its threads.
All threads of a process share its virtual address space and system resources.
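As a minimal sketch (using C++ std::thread, which is an assumption made here for illustration and not part of the original answer), a process's primary thread can create a second thread, and both see the same address space:

// sketch: two threads of the same process share its address space
#include <iostream>
#include <thread>

int shared_counter = 0;            // global variable in the process's address space

void worker() {
    shared_counter += 1;           // the new thread modifies the same variable
}

int main() {
    std::thread t(worker);         // additional thread created by the primary thread
    t.join();                      // wait for it to finish
    std::cout << shared_counter << std::endl;   // prints 1: the write is visible to the primary thread
    return 0;
}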

What are the different IPC techniques?
Inter-Process Communication (IPC) is a set of techniques for the exchange of data among two or more threads in one or more processes.
IPC techniques are divided into methods for message passing, synchronization, shared memory, and remote procedure calls (RPC).

Message passing is a form of communication. Forms of messages include function invocations, signals, and data packets. In a microkernel design, for example, messages are passed between the kernel and one or more server processes.
Process synchronization refers to the coordination of simultaneous threads or processes to complete a task in order to get correct runtime order and avoid unexpected race conditions.
Shared memory is a memory that may be simultaneously accessed by multiple programs with intent to provide communication among them or avoid redundant copies.

Remote procedure call (RPC) is an Inter-process communication technology that allows a computer program to cause a subroutine or procedure to execute in another address space
An RPC is initiated by the client sending a request message to a known remote server in order to execute a specified procedure. An important difference between remote procedure calls and local calls is that remote calls can fail because of unpredictable network problems.
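A small sketch of message passing between related processes (assuming a POSIX system; pipe() and fork() are standard POSIX calls, used here only for illustration):

// sketch: parent and child process communicating through a POSIX pipe
#include <cstdio>
#include <cstring>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }   // fd[0] = read end, fd[1] = write end
    pid_t pid = fork();
    if (pid == 0) {                                      // child: writes a message
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        return 0;
    }
    close(fd[1]);                                        // parent: reads the message
    char buf[64];
    read(fd[0], buf, sizeof(buf));
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(nullptr);
    return 0;
}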

What are the different multi-thread synchronization techniques? What is the difference between a mutex and a semaphore? Give an example.
Compare and swap – reads the old value from a memory location and computes a new value based on it. If, by the time of the write, the old value is still in that location, it is replaced with the new value; otherwise it is not.

Semaphore – a protected variable that constitutes the classic method for restricting access to shared resources, such as shared memory, in a multiprogramming environment.

Mutex – a mutex is a binary semaphore, initialized to 1. When access to the shared resource is granted, it changes to 0, so no other thread can access the same resource until it is released.
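A minimal sketch, assuming C++ std::mutex and std::thread (not part of the original answer), of a mutex protecting a shared counter from a race condition:

// sketch: a mutex guarding a shared counter against concurrent updates
#include <iostream>
#include <mutex>
#include <thread>

int counter = 0;
std::mutex counter_mutex;

void increment_many() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(counter_mutex);   // acquire; released when lock leaves scope
        ++counter;                                          // only one thread is in this section at a time
    }
}

int main() {
    std::thread t1(increment_many), t2(increment_many);
    t1.join();
    t2.join();
    std::cout << counter << std::endl;   // always 200000 with the mutex; unpredictable without it
    return 0;
}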

What do you understand by deadlocks?
A situation where two or more processes are waiting indefinitely for an event that can be caused by one of the waiting processes is called a deadlock.

What are the different OS scheduling algorithms?
First come First Served.
Shortest Job First.
Priority Based.
Round Robin.
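To illustrate one of these, here is a rough round-robin simulation (the quantum value and process list are made up for illustration): each ready process runs for at most one time quantum, then goes to the back of the ready queue.

// sketch: simulating round-robin scheduling with a fixed time quantum
#include <algorithm>
#include <iostream>
#include <queue>
#include <string>
#include <vector>

struct Proc { std::string name; int remaining; };   // remaining CPU time needed

int main() {
    const int quantum = 4;                           // time slice per turn
    std::queue<Proc> ready;
    for (const Proc &p : std::vector<Proc>{{"P1", 10}, {"P2", 5}, {"P3", 8}}) ready.push(p);

    while (!ready.empty()) {
        Proc p = ready.front(); ready.pop();
        int run = std::min(quantum, p.remaining);    // run for one quantum or until the process finishes
        p.remaining -= run;
        std::cout << p.name << " runs for " << run << " units\n";
        if (p.remaining > 0) ready.push(p);          // not finished: back of the ready queue
    }
    return 0;
}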

What do you understand by thread priorities?
Not all threads are created equal. Sometimes you want to give one thread more time than another. Threads that interact with the user should get high priorities; threads that calculate in the background should get low priorities.
In Java, for example, thread priorities are defined as integers between 1 and 10: ten is the highest priority, one is the lowest, and the normal priority is five. Higher-priority threads get more CPU time.
What do you understand by a context switch and how does it occur?
A context switch is the computing process of storing and restoring the state (context) of a CPU so that multiple processes can share a single CPU resource. The context switch is an essential feature of a multitasking operating system.
It occurs during:
multitasking, interrupt handling, and user-to-kernel mode switches.

What do you understand by a device driver?
A device driver is a low-level computer program used by the operating system to interact with a hardware device. Drivers are hardware dependent and OS specific.

What do you understand by virtual memory?
It is a computer system technique which gives an application program the impression that it has contiguous working memory, while in fact it may be physically fragmented and may even overflow on to disk storage. Systems that use this technique make programming of large applications easier and use real physical memory (e.g. RAM) more efficiently than those without virtual memory.

What do you understand by a cache?
A cache is a small, fast memory buffer placed between the CPU and main memory, used to accommodate the speed difference between them.

What are the different mechanisms to map device memory to host memory?



What do you understand by stack memory / heap memory?
A heap memory pool is an internal memory pool created at start-up that tasks use to dynamically allocate memory as needed, for example for data that is too large or too long-lived to keep on the stack.
Stacks in computing architectures are regions of memory where data is added or removed in a Last-In-First-Out manner. Because the data is added and removed in a last-in-first-out manner, stack allocation is very simple and typically faster than heap allocation. Another advantage is that memory on the stack is automatically reclaimed when the function exits, which can be convenient for the programmer.
A disadvantage of stack based memory allocation is that a thread's stack size can be as small as a few dozen kilobytes. Allocating more memory on the stack than is available can result in a crash due to stack overflow. Another disadvantage is that the memory stored on the stack is automatically deallocated when the function that created it returns, and thus the function must copy the data if they should be available to other parts of the program after it returns.
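A small sketch in plain C++ (new/delete chosen here just for illustration) contrasting the two kinds of allocation:

// sketch: stack allocation vs heap allocation
#include <iostream>

int main() {
    int on_stack = 42;             // automatic storage: reclaimed when the function returns
    int *on_heap = new int(42);    // dynamic storage: lives until explicitly freed

    std::cout << on_stack << " " << *on_heap << std::endl;

    delete on_heap;                // the programmer must release heap memory
    return 0;                      // stack memory is reclaimed automatically here
}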
What do you mean by paging? Explain it in the context of swapping.
When a program is selected for execution, the system brings it into virtual storage, divides it into pages of four kilobytes, and transfers the pages into central storage for execution. To the programmer, the entire program appears to occupy contiguous space in storage at all times. Actually, not all pages of a program are necessarily in central storage, and the pages that are in central storage do not necessarily occupy contiguous space.
This movement of pages between auxiliary storage slots and central storage frames is called paging. Paging is key to understanding the use of virtual storage in z/OS.
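For illustration (using the 4 KB page size mentioned above; the shift/mask arithmetic is a standard textbook formulation, not part of the original), a virtual address splits into a page number and an offset within the page:

// sketch: splitting a virtual address into page number and offset for 4 KB pages
#include <cstdint>
#include <cstdio>

int main() {
    const uint32_t PAGE_SIZE = 4096;             // 4 KB = 2^12 bytes
    uint32_t vaddr = 0x00012A38;                 // example virtual address

    uint32_t page_number = vaddr / PAGE_SIZE;    // equivalently vaddr >> 12
    uint32_t offset      = vaddr % PAGE_SIZE;    // equivalently vaddr & 0xFFF

    printf("page number = %u, offset = %u\n", page_number, offset);
    return 0;
}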
What do you understand by segmentation and fragmentation?
In computer storage, fragmentation is a phenomenon in which storage space is used inefficiently, reducing storage capacity. The term is also used to denote the wasted space itself.
There are three different but related forms of fragmentation: external fragmentation, internal fragmentation, and data fragmentation.
Internal fragmentation occurs when storage is allocated but part of it is never used. This space is wasted.
External fragmentation is the phenomenon in which free storage becomes divided into many small pieces over time.
Data fragmentation occurs when a piece of data in memory is broken up into many pieces that are not close together.

View memory as a collection of variable-sized segments, rather than a linear array of bytes. Each segment can have its own protection, grow independently, etc.

Advantages:
Memory protection can be attached to each entry in the segment table, as with paging.
Sharing of memory is similar to paging (but per segment rather than per page).

Drawbacks:
Allocation requires the same algorithms as variable-sized memory partitions.
External fragmentation returns, and with it the compaction problem.
----------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------
Further Explanations:
-------------------------------------------------------------------------------------------------------------------------------------------
Process and Threads
A thread is also called a lightweight process (LWP).
In most multithreading operating systems, a process gets its own memory address space; a thread doesn't. Threads typically share the heap belonging to their parent process.
Typically, even though they share a common heap, threads have their own stack space. This is how one thread's invocation of a method is kept separate from another's.
Similarities between processes and threads:
1) Both share the CPU. 2) Both execute sequentially. 3) Both can create children. 4) If one thread is blocked, the next can start running, just as with processes.
Dissimilarities:
1) Threads are not independent of one another the way processes are. 2) All threads can access every address in the task, unlike processes. 3) Threads are designed to assist one another, whereas processes may or may not assist one another.
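A small sketch (C++ threads assumed, as above) of the point that heap data is shared while each thread's local variables live on its own stack:

// sketch: threads share heap data but each has its own stack
#include <iostream>
#include <thread>
#include <vector>

int main() {
    auto *shared = new std::vector<int>(2);   // heap object, visible to both threads

    auto worker = [shared](int id) {
        int local = (id + 1) * 10;            // 'local' lives on this thread's own stack
        (*shared)[id] = local;                // but both threads can write into the shared heap object
    };

    std::thread t1(worker, 0), t2(worker, 1);
    t1.join();
    t2.join();

    std::cout << (*shared)[0] << " " << (*shared)[1] << std::endl;   // 10 20
    delete shared;
    return 0;
}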
-------------------------------------------------------------------------------------------------------------------------------------------
Deadlocks
A deadlock is a situation wherein two or more competing actions are waiting for the other to finish, and thus neither ever does.
All necessary conditions for a deadlock to occur:
1. Mutual exclusion: a resource cannot be used by more than one process at a time.
2. Hold and wait: processes already holding resources may request new resources; they keep waiting, so others cannot use the resources they hold.
3. No preemption: only a process holding a resource may release it. (Preemption is the act of temporarily interrupting a task and forcibly taking away the resources it holds; preemption techniques include time-slicing and time-sharing.)
4. Circular wait: two or more processes form a circular chain where each process waits for a resource that the next process in the chain holds.
Prevention: make sure at least one of the above conditions cannot hold.
Avoidance: deadlock can be avoided if certain information about processes is available in advance of resource allocation, e.g.:
1. Banker's algorithm – prevents deadlock by denying or postponing a request if it determines that granting it could put the system in an unsafe state (one in which deadlock could occur).
2. Wound-wait. 3. Wait-die.
Recovery: checkpointing and rollback.
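A minimal sketch (C++ mutexes, assumed here for illustration) of the circular-wait condition: two threads each hold one lock while waiting for the other's. Taking the locks in one fixed global order is a simple prevention, since it breaks the circular wait.

// sketch: two threads can deadlock by taking two mutexes in opposite order
#include <mutex>
#include <thread>

std::mutex a, b;

void thread1() {
    std::lock_guard<std::mutex> la(a);   // holds a ...
    std::lock_guard<std::mutex> lb(b);   // ... and waits for b
}

void thread2_unsafe() {
    std::lock_guard<std::mutex> lb(b);   // holds b ...
    std::lock_guard<std::mutex> la(a);   // ... and waits for a -> circular wait, possible deadlock
}

void thread2_safe() {                    // prevention: same lock order as thread1 (a before b)
    std::lock_guard<std::mutex> la(a);
    std::lock_guard<std::mutex> lb(b);
}

int main() {
    std::thread t1(thread1), t2(thread2_safe);   // running thread2_unsafe instead may hang forever
    t1.join();
    t2.join();
    return 0;
}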
-------------------------------------------------------------------------------------------------------------------------------------------
Segmentation
A segment is a meaningful unit of information, e.g. a procedure or a large data structure. In segmentation, every segment is in its own logical memory.
Its benefits include a higher hit ratio, meaningful protection of segments, simplified sharing of code via dynamic linking, and support for dynamic data structures.
OR
Segmentation is one of the most common ways to achieve memory protection; another common one is paging. Segmentation means that a part or parts of memory are sealed off from the currently running process through the use of hardware registers. If the data about to be read or written lies outside the permitted address space of that process, a segmentation fault results.
Segmentation is a memory-management scheme that supports this user view of memory. A logical address space is actually a collection of segments. Each segment has a name and a length. The address specifies both the segment name and the offset within the segment. The user therefore specifies each address by two parameters: a segment name and an offset.
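A sketch (the segment table, base addresses, and limits are invented for illustration) of translating a logical (segment, offset) address into a physical address, including the limit check that produces a segmentation fault:

// sketch: logical (segment, offset) -> physical address via a segment table
#include <cstdio>

struct SegmentEntry { unsigned base; unsigned limit; };   // start address and length of the segment

int main() {
    SegmentEntry table[] = { {0x1000, 0x400}, {0x2000, 0x800}, {0x4000, 0x200} };   // hypothetical code, data, stack segments

    unsigned segment = 1, offset = 0x150;                  // logical address: segment 1, offset 0x150

    if (offset >= table[segment].limit) {                  // protection check: offset beyond the segment's length
        printf("segmentation fault (trap to the OS)\n");
        return 1;
    }
    unsigned physical = table[segment].base + offset;      // within bounds: add the segment's base
    printf("physical address = 0x%X\n", physical);         // 0x2150
    return 0;
}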
-------------------------------------------------------------------------------------------------------------------------------------------
Types of fragmentation
Horizontally fragmented data means that data is distributed across different sites based on one or more primary keys. This type of data
distribution is typical where, for example, branch offices in an organization deal mostly with a set of local customers and the related
customer data need not be accessed by other branch offices.
Vertically Fragmented Data is data that has been split by columns across multiple systems. The primary key is replicated at each site. For
example, a district office may maintain client information such as name and address keyed on client number while head office maintains client
account balance and credit information, also keyed on the same client number.
Fragmentation occurs in a dynamic memory allocation system when many of the free blocks are too small to satisfy any request.
External Fragmentation: External Fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is
left over that cannot be effectively used. If too much external fragmentation occurs, the amount of usable memory is drastically reduced.
Total memory space exists to satisfy a request, but it is not contiguous.
Internal Fragmentation: Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
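For illustration (a hypothetical allocator that rounds every request up to a power of two; this policy is an assumption, not from the text), the internal fragmentation is simply the rounded block size minus the requested size:

// sketch: internal fragmentation when an allocator rounds requests up to a power of two
#include <cstdio>

// round n up to the next power of two (hypothetical allocator policy)
unsigned round_up_pow2(unsigned n) {
    unsigned block = 1;
    while (block < n) block *= 2;
    return block;
}

int main() {
    unsigned requested = 100;                        // bytes the program asked for
    unsigned allocated = round_up_pow2(requested);   // 128 bytes actually handed out
    printf("requested %u, allocated %u, internal fragmentation = %u bytes\n",
           requested, allocated, allocated - requested);   // 28 bytes wasted inside the block
    return 0;
}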
-------------------------------------------------------------------------------------------------------------------------------------------