GATE 2015
Praneeth A S (UG201110023)
B.Tech (Computer Science & Engineering)
Indian Institute of Technology Jodhpur
Jodhpur, Rajasthan 342011, India
August 4, 2014

Dedicated to my beloved parents, who have stood beside me throughout my career.

Acknowledgements
Many sources have been used.

Preface
To be used for GATE preparation.

Contents

I Algorithms
1 Test
2 Complexity Analysis (Asymptotic Notations): Θ Notation; Big O Notation; Ω Notation
3 Solving Recurrences: Substitution Method (Example); Recurrence Tree Method; Master Method
4 Sorting: Insertion Sort; Bubble Sort; Bucket Sort; Heap Sort; Quick Sort; Merge Sort; Summary
5 Searching: Linear Search; Binary Search; Interpolation Search [1]; Summary
6 Divide & Conquer
7 Greedy Algorithms: Introduction; Spanning Trees; Activity Selection Problem
8 Dynamic Programming: Introduction; Fibonacci Numbers; Ugly Numbers
9 NP Classes: Introduction; Problem Definitions

II Data Structures
10 Hashing: Introduction; ReHashing

III Theory Of Computation
11 Finite Automaton
12 Undecidability: Definitions
13 Turing Machines: Notation
14 Regular Sets: Pumping Lemma

IV Computer Organization [3]
15 Machine Instructions & Addressing Modes: Definitions; Design Principles; Machine Instructions [3, p.67,90]
16 Pipelining: Designing Instruction Sets for Pipelining; Pipeline Hazards (Structural Hazards; Data Hazards; Hazards)
17 Arithmetic for Computers: Definitions
18 Speedup

V First Order Logic
19 Propositional Logic: Introduction; Truth Table; Rules; Conditional Proof; Indirect Proof
20 Predicate Logic: Introduction

VI Probability
21 Definitions
22 Probability: Terms; Propositions; Conditional Probability (Bayes Formula); Mean, Median, Mode and Standard Deviation; Distributions (Bernoulli; HyperGeometric; Uniform; Normal; Exponential; Poisson; Binomial; Summary)

VII HTML
23 Basic Commands: Basics; Tags; Tags Reference
VIII Numerical Analysis
24 Numerical solutions to non-linear algebraic equations: Bisection Method; Newton-Raphson Method; Secant Method
25 Numerical Integration: Introduction; Trapezoidal Rule; Simpson's 1/3 Rule; Simpson's 3/8 Rule
26 LU decomposition for systems of linear equations: Introduction; Factorizing A as L and U (Example); Algorithm for Solving

IX XML
27 Basic Information: Basics; Rules for XML Docs; XML Elements; XML Attributes

X Computer Networks
28 Network Security: Substitution Ciphers; Public Key Algorithms (Diffie-Hellman Key Exchange; RSA; Knapsack Algorithm; El Gamal Algorithm); Digital Signatures (Symmetric Key Signatures; Public Key Signatures; Message Digests); Communication Security (Firewalls)
29 Application Layer: DNS - Domain Name System (Name Servers); HTTP - Hyper Text Transfer Protocol
30 Routing Algorithms: Introduction; Shortest Path Algorithm - Dijkstra; Flooding; Distance Vector Routing - Bellman Ford; Link State Routing (Learning about Neighbours; Setting Link Costs; Building Link State Packets; Distributing the Link State Packets; Computing Shortest Path); Hierarchical Routing; Broadcast Routing; Multicast Routing; IPv4 - Network Layer (Communication in Internet)
31 Error & Flow Control - Data Link Layer: Introduction (Byte Count; Byte Stuffing; Bit Stuffing); Error Correction & Detection (Error Correcting Codes; Error Detecting Codes); Data Link Protocols (Simplex Protocol; Stop & Wait Protocol for Error-Free Channel; Stop & Wait Protocol for Noisy Channel; Sliding Window Protocols)
32 Data Link Layer Switching: Bridges (Learning Bridges; Spanning Trees)
33 Transport Layer: Introduction
34 Internet Protocol - IP: IPv4
35 Network Layer: Subnetting; Classless Addressing
36 Physical Layer: Introduction

XI Calculus
37 Continuity
38 Differentiation
39 Integration
40 Definite Integrals & Improper Integrals

XII Linear Algebra
41 Determinants: Introduction; Properties; Determinants & Eigenvalues; Eigenvectors & Eigenvalues; Cramer's Rule; Rank of a Matrix; System of Linear Equations
XIII Set Theory & Algebra
42 Sets
43 Relations: Properties; Other Properties Names
44 Functions
45 Group
46 Lattices
47 Partial Orders

XIV Combinatory
48 Generating Functions
49 Recurrence Relations

XV C Programs for Algorithms I [6]
50 Sorting
51 Spanning Trees
52 Dynamic Programming

XVI C Programs for Testing

Index

List of Algorithms
1 Calculate y = x^n
2 Calculate the shortest path from source node to all destination nodes
3 Calculate the shortest path from source node to all destination nodes

List of Figures
3.1 Master Theorem
13.1 Turing machine accepting 0^n 1^n for n ≥ 1
28.1 Digital signatures using public key
28.2 SHA-1 algorithm from Alice to Bob
28.3 Firewall protecting internal network
29.1 A portion of the Internet domain name space
29.2 Domain name space mapped to zones
29.3 DNS resource record types
29.4 HTTP 1.1 with (a) multiple connections and sequential requests, (b) a persistent connection and sequential requests, (c) a persistent connection and pipelined requests
30.1 Sample run of Dijkstra's algorithm
30.2 Sample run of Bellman-Ford's algorithm
30.3 Link state packets
30.4 Packet buffer for router B
30.5 Hierarchical routing
30.6 IPv4 header
31.1 Byte count: (a) without errors; (b) with one error
31.2 Byte stuffing: (a) a frame delimited by flag bytes; (b) four examples of byte sequences before and after byte stuffing
31.3 Bit stuffing: (a) the original data; (b) the data as they appear on the line; (c) the data as they are stored in the receiver's memory after destuffing
31.4 Hamming codes for (11, 7) for even parity
31.5 Convolution codes
31.6 Interleaving parity
31.7 Calculating CRC
31.8 A sliding window of size 1, with a 3-bit sequence number: (a) initially; (b) after the first frame has been sent; (c) after the first frame has been received; (d) after the first acknowledgement has been received
31.9 (a) Normal case. (b) Abnormal case. The notation is (seq, ack, packet number). An asterisk indicates where a network layer accepts a packet
31.10 Pipelining and error recovery: effect of an error when (a) receiver's window size is 1 (Go Back n) and (b) receiver's window size is large (Selective Repeat)
31.11 (a) Initial situation with a window of size 7. (b) After 7 frames have been sent and received but not acknowledged. (c) Initial situation with a window size of 4. (d) After 4 frames have been sent and received but not acknowledged
32.1 (a) Bridge connecting two multidrop LANs. (b) Bridges (and a hub) connecting seven point-to-point stations
32.2 Bridges with two parallel links
32.3 (a) Which device is in which layer. (b) Frames, packets, and headers
33.1 The primitives of the transport layer
33.2 The primitives of the TCP socket
34.1 IPv4 header

List of Tables
4.1 Summary of sorting methods
5.1 Summary of search methods
10.1 Empty table
10.2 Empty table for quadratic probing
10.3 Empty table for double hashing
22.1 Probability formula table
22.2 Probability formula table

List of Listings
1 Binary Search in C (Recursive)
2 Binary Search in C (Iterative)
3 Insertion Sort in C
4 Bubble Sort in C
5 Merge Sort in C
6 Quick Sort in C
7 Quick Sort in C (Iterative)
8 Heap Sort in C
9 Bucket Sort in C++
10 Kruskal MST in C
11 Fibonacci Numbers in C
12 Fibonacci Numbers in C (Memoization)
13 Fibonacci Numbers in C (Tabulation)
14 Ugly Numbers in C
15 Ugly Numbers in C (Dynamic Programming)
16 Longest Increasing Subsequence in C
17 Longest Increasing Subsequence in C (Dynamic Programming)
18 Longest Common Subsequence in C
19 Longest Common Subsequence in C (Dynamic Programming)
20 Edit Distance in C

Todo list
- Is the size of a register = word size always? Interesting point: the word size of an architecture is often (but not always!) defined by the size of the general-purpose registers.
- Check for theorem's name.
- Check for Newton's forward difference polynomial formula.
- Check the formula for the error of Simpson's 3/8 rule.
- Use a 32-bit sequence number. With one link state packet per second, it would take 137 years to wrap around. Time = 2^(no. of bits) / BW; here BW = 1 lsp/sec and no. of bits = 32. Did not understand.
- Suppose 111110 is in the data; does the receiver destuff that, leading to wrong information (in bit stuffing)?
- How to calculate and detect errors in checksum?
- How are errors detected in CRC?
- Please understand selective repeat.
- Check about net mask.
- Learn about DHCP.
- Check this part for the m, n comparison condition for no/infinitely many solutions. Also for m = n.

Part I Algorithms

Chapter 1 Test

Algorithm 1 Calculate y = x^n
Input: n ≥ 0 ∨ x ≠ 0
Output: y = x^n
1.  y ← 1
2.  if n < 0 then
3.      X ← 1/x
4.      N ← −n
5.  else
6.      X ← x
7.      N ← n
8.  end if
9.  while N ≠ 0 do
10.     if N is even then
11.         X ← X × X
12.         N ← N/2
13.     else                // N is odd
14.         y ← y × X
15.         N ← N − 1
16.     end if
17. end while               // end of while
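The pseudocode above is exponentiation by repeated squaring. A minimal C sketch of the same loop (the function name fast_pow and the use of double are my illustrative choices, not from the original):

#include <stdio.h>

/* Compute y = x^n by repeated squaring, following Algorithm 1.
 * Handles negative n by inverting x, as in steps 2-4. */
double fast_pow(double x, long n) {
    double y = 1.0, X = x;
    long N = n;
    if (N < 0) { X = 1.0 / x; N = -N; }
    while (N != 0) {
        if (N % 2 == 0) {        /* N is even: square the base */
            X *= X;
            N /= 2;
        } else {                 /* N is odd: fold one factor into y */
            y *= X;
            N -= 1;
        }
    }
    return y;
}

int main(void) {
    printf("%g\n", fast_pow(2.0, 10));   /* prints 1024 */
    printf("%g\n", fast_pow(2.0, -2));   /* prints 0.25 */
    return 0;
}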
Chapter 2 Complexity Analysis (Asymptotic Notations)

2.1 Θ Notation
Θ(g(n)) = {f(n) : ∃ positive constants c₁, c₂ and n₀ such that
0 ≤ c₁ g(n) ≤ f(n) ≤ c₂ g(n)  ∀n ≥ n₀}   (2.1)

2.2 Big O Notation
O(g(n)) = {f(n) : ∃ positive constants c₀ and n₀ such that
0 ≤ f(n) ≤ c₀ g(n)  ∀n ≥ n₀}   (2.2)

2.3 Ω Notation
Ω(g(n)) = {f(n) : ∃ positive constants c₀ and n₀ such that
0 ≤ c₀ g(n) ≤ f(n)  ∀n ≥ n₀}   (2.3)

Chapter 3 Solving Recurrences

3.1 Substitution Method
Definition 1 - Substitution Method. We make a guess for the solution and then use mathematical induction to prove the guess correct or incorrect. For example, consider the recurrence T(n) = 2T(n/2) + n.

3.1.1 Example
We guess the solution as T(n) = O(n log n). Now we use induction to prove our guess. We need to prove that T(n) ≤ cn log n. We can assume that it is true for values smaller than n.
T(n) = 2T(n/2) + n
     ≤ 2 · c(n/2) log(n/2) + n
     = cn log n − cn log 2 + n
     = cn log n − cn + n
     ≤ cn log n

3.2 Recurrence Tree Method
Read it from here: Recurrence Tree Method.

3.3 Master Method
Figure 3.1: Master Theorem
T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1. There are 3 cases:
1. If f(n) = Θ(n^c) where c < log_b a, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^c) where c = log_b a, then T(n) = Θ(n^c log n).
3. If f(n) = Θ(n^c) where c > log_b a, then T(n) = Θ(f(n)).
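As a quick check of case 2 against the substitution-method example above (my own worked instance, not from the original): for T(n) = 2T(n/2) + n we have a = 2, b = 2, and f(n) = Θ(n¹); since log₂ 2 = 1 = c, case 2 applies and T(n) = Θ(n log n), matching the bound proved by substitution.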
Chapter 4 Sorting

4.1 Insertion Sort (Listing 3)
4.2 Bubble Sort (Listing 4)
4.3 Bucket Sort (Listing 9)
4.4 Heap Sort (Listing 8)
4.5 Quick Sort (Listing 6)
4.6 Merge Sort (Listing 5)

4.7 Summary

Table 4.1: Summary of Sorting Methods

Sorting Method | Usage | Time Complexity (Best / Average / Worst) | Space Complexity | No. of swaps | Stability
Insertion Sort |       |                                          |                  |              | ✓
Bucket Sort    |       |                                          |                  |              |
Bubble Sort    |       |                                          |                  |              | ✓
Heap Sort      |       |                                          |                  |              | ×
Quick Sort     |       |                                          |                  |              | ×
Merge Sort     |       |                                          |                  |              | ✓

Auxiliary space is the extra or temporary space used by an algorithm. Space complexity of an algorithm is the total space taken by the algorithm with respect to the input size; it includes both auxiliary space and the space used by the input.
Stability: a sorting method is stable if two objects with equal keys appear in the same order in the sorted output as they appear in the unsorted input array.

Chapter 5 Searching

5.1 Linear Search
5.2 Binary Search
5.3 Interpolation Search [1]
5.4 Summary

Table 5.1: Summary of Search Methods
Method | Time Complexity

Chapter 6 Divide & Conquer

Chapter 7 Greedy Algorithms

7.1 Introduction
Greedy is an algorithmic paradigm that builds up a solution piece by piece, always choosing the next piece that offers the most obvious and immediate benefit. Greedy algorithms are used for optimization problems. An optimization problem can be solved using Greedy if the problem has the following property: at every step, we can make a choice that looks best at the moment, and we get the optimal solution of the complete problem. Examples: Kruskal MST, Prim MST, Dijkstra Shortest Path, Huffman Coding, Activity Selection Problem, etc.
1. Kruskal's Minimum Spanning Tree (MST): In Kruskal's algorithm, we create an MST by picking edges one by one. The greedy choice is to pick the smallest-weight edge that doesn't cause a cycle in the MST constructed so far.
2. Prim's Minimum Spanning Tree: In Prim's algorithm also, we create an MST by picking edges one by one. We maintain two sets: the set of vertices already included in the MST and the set of vertices not yet included. The greedy choice is to pick the smallest-weight edge that connects the two sets.
3. Dijkstra's Shortest Path: Dijkstra's algorithm is very similar to Prim's algorithm. The shortest path tree is built up, edge by edge. We maintain two sets: the set of vertices already included in the tree and the set of vertices not yet included. The greedy choice is to pick the edge that connects the two sets and is on the smallest-weight path from the source to the set of not-yet-included vertices.
4. Huffman Coding: Huffman coding is a lossless compression technique. It assigns variable-length bit codes to different characters. The greedy choice is to assign the shortest bit-length code to the most frequent character.

7.2 Spanning Trees
MST: Given a connected and undirected graph, a spanning tree of that graph is a subgraph that is a tree and connects all the vertices together. A single graph can have many different spanning trees. A minimum spanning tree (MST), or minimum weight spanning tree, for a weighted, connected and undirected graph is a spanning tree with weight less than or equal to the weight of every other spanning tree. The weight of a spanning tree is the sum of the weights given to each edge of the spanning tree. A minimum spanning tree has (V − 1) edges, where V is the number of vertices in the given graph.
Kruskal's MST Algorithm (Listing 10)

7.3 Activity Selection Problem
Definition 2 - Activity Selection Problem. You are given n activities with their start and finish times. Select the maximum number of activities that can be performed by a single person, assuming that a person can only work on a single activity at a time.
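The greedy solution sorts activities by finish time and repeatedly picks the first activity that starts no earlier than the previously selected one finishes. A minimal C sketch (my own; it assumes the activities are already sorted by finish time):

#include <stdio.h>

/* Greedy activity selection: activities sorted by finish time.
 * Pick an activity whenever it starts no earlier than the last
 * selected activity finishes. */
void select_activities(const int start[], const int finish[], int n) {
    int last = 0;                     /* index of last selected activity */
    printf("selected: %d", 0);        /* first activity is always chosen */
    for (int i = 1; i < n; i++) {
        if (start[i] >= finish[last]) {
            printf(" %d", i);
            last = i;
        }
    }
    printf("\n");
}

int main(void) {
    int start[]  = {1, 3, 0, 5, 8, 5};
    int finish[] = {2, 4, 6, 7, 9, 9};   /* sorted by finish time */
    select_activities(start, finish, 6); /* prints: selected: 0 1 3 4 */
    return 0;
}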
Chapter 8 Dynamic Programming

8.1 Introduction
Dynamic programming is an algorithmic paradigm that solves a given complex problem by breaking it into subproblems and storing the results of subproblems to avoid computing the same results again. The following are the two main properties of a problem that suggest it can be solved using dynamic programming. Examples: Floyd-Warshall algorithm, Bellman-Ford algorithm.
1. Overlapping Subproblems: Dynamic programming is mainly used when solutions of the same subproblems are needed again and again. In dynamic programming, computed solutions to subproblems are stored in a table so that they don't have to be recomputed. Uses: Fibonacci numbers.
2. Optimal Substructure: A given problem has the optimal substructure property if an optimal solution of the given problem can be obtained by using optimal solutions of its subproblems. For example, the shortest path problem has the following optimal substructure property: if a node x lies on the shortest path from a source node u to a destination node v, then the shortest path from u to v is the combination of the shortest path from u to x and the shortest path from x to v. Uses: Longest Increasing Subsequence, Shortest Path.

Two approaches:
1. Memoization (Top Down)
2. Tabulation (Bottom Up)

8.2 Fibonacci Numbers
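This section is empty in the source apart from the full programs referenced in the listings (Listings 11-13). A minimal C sketch of the memoization (top-down) approach, assuming inputs small enough to fit the table (MAXN is my illustrative bound):

#include <stdio.h>

#define MAXN 64
static long long memo[MAXN];   /* 0 means "not computed yet" */

/* Top-down Fibonacci: each fib(n) is computed once and cached,
 * turning the exponential naive recursion into O(n). */
long long fib(int n) {
    if (n <= 1) return n;
    if (memo[n] != 0) return memo[n];
    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

int main(void) {
    printf("%lld\n", fib(40));   /* prints 102334155 */
    return 0;
}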
8.3 Ugly Numbers
Definition 3 - Ugly Numbers. Ugly numbers are numbers whose only prime factors are 2, 3 or 5. The sequence 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, . . . shows the first 11 ugly numbers. By convention, 1 is included. See Listing 15 for the dynamic programming version of the problem.

Chapter 9 NP Classes

9.1 Introduction
Definition 4 - P. A decision problem that can be solved in polynomial time. That is, given an instance of the problem, the answer yes or no can be decided in polynomial time.

Definition 5 - NP. NP - Non-Deterministic Polynomial Time. A decision problem where instances of the problem for which the answer is yes have proofs that can be verified in polynomial time. This means that if someone gives us an instance of the problem and a certificate (sometimes called a witness) to the answer being yes, we can check that it is correct in polynomial time. Equivalently, a problem solvable by a non-deterministic Turing machine in polynomial time.

Definition 6 - NP Complete. A decision problem L is NP-complete if it is in the set of NP problems and also in the set of NP-hard problems. NP-complete is a subset of NP, the set of all decision problems whose solutions can be verified in polynomial time; NP may be equivalently defined as the set of decision problems that can be solved in polynomial time on a non-deterministic Turing machine. A problem p in NP is also NP-complete if every other problem in NP can be transformed into p in polynomial time. That is, a decision problem C is NP-complete if:
1. C is in NP, and
2. every problem in NP is reducible to C in polynomial time.
Consequence: if we had a polynomial-time algorithm (on a UTM, or any other Turing-equivalent abstract machine) for C, we could solve all problems in NP in polynomial time, i.e., P = NP = NP-Complete.
Examples: Boolean satisfiability problem (SAT), Knapsack problem, Hamiltonian path problem, Travelling salesman problem, Subgraph isomorphism problem, Subset sum problem, Clique problem, Vertex cover problem, Independent set problem, Dominating set problem, Graph coloring problem, etc.

Definition 7 - NP Hard. The precise definition here is that a problem X is NP-hard if there is an NP-complete problem Y such that Y is reducible to X in polynomial time. But since any NP-complete problem can be reduced to any other NP-complete problem in polynomial time, all NP-complete problems can be reduced to any NP-hard problem in polynomial time. Then, if there is a solution to one NP-hard problem in polynomial time, there is a solution to all NP problems in polynomial time. If a problem is NP-hard, any problem in NP can be reduced to it; this means that if we can solve that problem, we can easily solve any problem in NP. If we could solve an NP-hard problem in polynomial time, this would prove P = NP.
Examples: The halting problem is the classic NP-hard problem. This is the problem: given a program P and input I, will it halt?

Definition 8 - Polynomial Reducible. Can arbitrary instances of problem Y be solved using a polynomial number of standard computational steps, plus a polynomial number of calls to a black box that solves problem X? If so, Y is polynomial-time reducible to X, written Y ≤_P X. Suppose Y ≤_P X. If X can be solved in polynomial time, then Y can be solved in polynomial time; X is at least as hard as Y (with respect to polynomial time). Conversely, if Y cannot be solved in polynomial time, then X cannot be solved in polynomial time.

9.2 Problem Definitions
Definition 9 - Circuit SAT.
Definition 10.

Part II Data Structures

Chapter 10 Hashing

10.1 Introduction
• The prime number used for hashing should be very large, to avoid collisions.
• A collision occurs when two keys hash to the same value.
• The following hash function involves all characters in the key and can generally be expected to distribute well: it computes Σ_{i=0}^{KeySize−1} Key[KeySize − i − 1] × 37^i and brings the result into the proper range. The code computes a polynomial function (of 37) by use of Horner's rule. For instance, another way of computing h_k = k₀ + 37 k₁ + 37² k₂ is by the formula h_k = ((k₂) × 37 + k₁) × 37 + k₀. Horner's rule extends this to an nth-degree polynomial.

/**
 * A hash routine for string objects.
 */
unsigned int hash( const string & key, int tableSize )
{
    unsigned int hashVal = 0;
    for( char ch : key )
        hashVal = 37 * hashVal + ch;
    return hashVal % tableSize;
}

• The first strategy, commonly known as separate chaining, is to keep a list of all elements that hash to the same value. It might be preferable to avoid the use of standard lists here (since these lists are doubly linked and waste space).
• We define the load factor, λ, of a hash table to be the ratio of the number of elements in the hash table to the table size. The average length of a list is λ.
• The effort required to perform a search is the constant time required to evaluate the hash function plus the time to traverse the list. In an unsuccessful search, the number of nodes to examine is λ on average. A successful search requires that about 1 + λ/2 links be traversed.
• The general rule for separate chaining hashing is to make the table size about as large as the number of elements expected (in other words, let λ ≈ 1). If the load factor exceeds 1, we expand the table size by calling rehash.

Table 10.1: Empty Table (slots 0-9, all empty)
Table 10.2: Empty Table for Quadratic Probing (slots 0-9, all empty)

• Generally, the load factor should be below λ = 0.5 for a hash table that doesn't use separate chaining. We call such tables probing hash tables.
• Three common collision resolution strategies:
  - Linear Probing
  - Quadratic Probing
  - Double Hashing
• Linear Probing
  - In linear probing, f is a linear function of i, typically f(i) = i. This amounts to trying cells sequentially (with wraparound) in search of an empty cell. Inserting the keys 89, 18, 49, 58, 69 with hash(x) = x mod 10 and collision resolution f(i) = i: 89 and 18 go to cells 9 and 8; 49 collides at 9 and wraps to cell 0; 58 collides at 8 and probes 9 and 0 before landing in cell 1; 69 collides at 9 and lands in cell 2. Final table: 0:49, 1:58, 2:69, 8:18, 9:89.
  - Primary clustering means that any key that hashes into the cluster will require several attempts to resolve the collision, and then it will add to the cluster.
• Quadratic Probing
  - f(i) = i².
  - When 49 collides with 89, the next position attempted is one cell away; this cell is empty, so 49 is placed there (cell 0). Next, 58 collides at position 8. The cell one away is tried, but another collision occurs; a vacant cell is found at the next cell tried, which is 2² = 4 away, so 58 is placed in cell 2. Similarly, 69 collides at 9 and is placed in cell 3. Final table: 0:49, 2:58, 3:69, 8:18, 9:89.

Table 10.3: Empty Table for Double Hashing (slots 0-9, all empty)

Theorem 1. If quadratic probing is used, and the table size is prime, then a new element can always be inserted if the table is at least half empty.

• Double Hashing
  - For double hashing, one popular choice is f(i) = i · hash₂(x).
  - hash₂(x) must never evaluate to 0.
  - hash₂(x) = R − (x mod R), with R = 7, a prime smaller than TableSize.
  - The first collision occurs when 49 is inserted: hash₂(49) = 7 − 0 = 7, so 49 is inserted in position 6. hash₂(58) = 7 − 2 = 5, so 58 is inserted at location 3. Finally, 69 collides and is inserted at a distance hash₂(69) = 7 − 6 = 1 away, in position 0. If we tried to insert 60 in position 0, we would have a collision; since hash₂(60) = 7 − 4 = 3, we would then try positions 3, 6, 9, and then 2 until an empty spot is found. Final table: 0:69, 3:58, 6:49, 8:18, 9:89.
  - If the table size is not prime, it is possible to run out of alternative locations prematurely.
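A minimal C sketch of linear probing insertion matching the example above (the fixed table size 10, the EMPTY sentinel, and int keys are my simplifications):

#include <stdio.h>

#define SIZE  10
#define EMPTY (-1)

/* Linear probing: on a collision at h, try h+1, h+2, ... (mod SIZE)
 * until an empty slot is found. Assumes the table is not full. */
void insert(int table[], int key) {
    int h = key % SIZE;
    while (table[h] != EMPTY)
        h = (h + 1) % SIZE;      /* f(i) = i: step one slot at a time */
    table[h] = key;
}

int main(void) {
    int table[SIZE], keys[] = {89, 18, 49, 58, 69};
    for (int i = 0; i < SIZE; i++) table[i] = EMPTY;
    for (int i = 0; i < 5; i++) insert(table, keys[i]);
    for (int i = 0; i < SIZE; i++)
        printf("%d: %d\n", i, table[i]);  /* 49, 58, 69 land in 0, 1, 2 */
    return 0;
}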
10.2 ReHashing
• As an example, suppose the elements 13, 15, 24, and 6 are inserted into a linear probing hash table of size 7 with hash function h(x) = x mod 7. The resulting table: 0:6, 1:15, 3:24, 6:13.
• If 23 is inserted into the table, the resulting table will be over 70 percent full. Because the table is so full, a new table is created. The size of this table is 17, because this is the first prime that is twice as large as the old table size. The new hash function is then h(x) = x mod 17. The old table is scanned, and the elements 6, 15, 23, 24, and 13 are inserted into the new table. The resulting table: 6:6, 7:23, 8:24, 13:13, 15:15.

Part III Theory Of Computation
References: [2], Wikipedia articles for definitions of Halting Problem, Recursive Sets, etc.

Chapter 11 Finite Automaton

Definition 11 - DFA, NFA, ε-NFA. M = (Q, Σ, δ, q₀, F)
Q : Set of states.
Σ : Set of alphabets (input symbols).
δ : Moving (transition) function.
q₀ : Initial state.
F : Set of final states. In an NFA, 2^Q is the total number of possible sets of final states.
In an ε-NFA, δ(q₀, 01) = ε-CLOSURE(δ(δ(q₀, 0), 1)).

Theorem 2. If L is accepted by an NFA, there exists an equivalent DFA accepting L.

Definition 12 - ε-closure. The ε-closure of q denotes the set of all vertices p such that there is a path from q to p labelled ε.

Definition 13. If L is accepted by an NFA with ε-transitions, then L is accepted by an NFA without ε-transitions.

Definition 14. L* = ∪_{i=0}^{∞} L^i.  L⁺ = ∪_{i=1}^{∞} L^i.

Theorem 3. Let r be a regular expression. Then there exists an NFA with ε-transitions that accepts L(r).

Definition 15 - Regular Language. If L is accepted by a DFA, then L is denoted by a regular expression.
R^k_ij = R^{k−1}_ik (R^{k−1}_kk)* R^{k−1}_kj ∪ R^{k−1}_ij
R^0_ij = {a | δ(q_i, a) = q_j} if i ≠ j;  {a | δ(q_i, a) = q_j} ∪ {ε} if i = j
R^k_ij is the set of all strings that take the finite automaton from state q_i to state q_j without going through any state numbered higher than k.

Chapter 12 Undecidability

12.1 Definitions
Definition 16 - Undecidable Problems. An undecidable problem is a decision problem for which it is known to be impossible to construct a single algorithm that always leads to a correct yes-or-no answer.

Definition 17 - Halting Problem. Given the description of an arbitrary program and a finite input, decide whether the program finishes running or will run forever. (Undecidable.)

Definition 18 - Recursive Sets. A set of natural numbers is called recursive, computable or decidable if there is an algorithm which terminates after a finite amount of time and correctly decides whether or not a given number belongs to the set.

Definition 19 - Recursively Enumerable Sets. A set S of natural numbers is called recursively enumerable, computably enumerable, semidecidable, provable or Turing-recognizable if there is an algorithm such that the set of input numbers for which the algorithm halts is exactly S. Example: the set of languages we can accept using a Turing machine. NOTE: there is no algorithm that decides the membership of a particular string in such a language.

Chapter 13 Turing Machines

13.1 Notation
M = (Q, Σ, Γ, δ, q₀, B, F)
1. Q: The finite set of states of the finite control.
2. Σ: The finite set of input symbols.
3. Γ: The complete set of tape symbols; Σ is always a subset of Γ.
4. δ: The transition function. The arguments of δ(q, X) are a state q and a tape symbol X. The value of δ(q, X), if it is defined, is a triple (p, Y, D), where:
(a) p is the next state, in Q.
(b) Y is the symbol, in Γ, written in the cell being scanned, replacing whatever symbol was there.
(c) D is a direction, either L or R, standing for "left" or "right" respectively, and telling us the direction in which the head moves.
5. q₀: The start state, a member of Q, in which the finite control is found initially.
6. B: The blank symbol.
This symbol is in Γ but not in Σ; i.e., it is not an input symbol. The blank appears initially in all but the finite number of initial cells that hold input symbols.
7. F: The set of final or accepting states, a subset of Q.

Use the string X₁X₂ . . . X_{i−1} q X_i X_{i+1} . . . X_n to represent an ID in which:
1. q is the state of the Turing machine.
2. The tape head is scanning the ith symbol from the left.
3. X₁X₂ . . . X_n is the portion of the tape between the leftmost and the rightmost nonblank. As an exception, if the head is to the left of the leftmost nonblank or to the right of the rightmost nonblank, then some prefix or suffix of X₁X₂ . . . X_n will be blank, and i will be 1 or n, respectively.

A move of M (for a left move, δ(q, X_i) = (p, Y, L)) is defined as
X₁X₂ . . . X_{i−1} q X_i X_{i+1} . . . X_n ⊢_M X₁X₂ . . . X_{i−2} p X_{i−1} Y X_{i+1} . . . X_n

Figure 13.1: Turing machine accepting 0ⁿ1ⁿ for n ≥ 1

Chapter 14 Regular Sets

14.1 Pumping Lemma
The pumping lemma is used to test whether a language is regular, based on the property that languages obtained from regular languages are regular. If a language is regular, it is accepted by a DFA M = (Q, Σ, δ, q₀, F) with some particular number of states, say n.

Lemma 3.1 - Pumping Lemma. Let L be a regular set. Then there is a constant n such that if z is any word in L and |z| ≥ n, we may write z = uvw in such a way that |uv| ≤ n, |v| ≥ 1, and for all i ≥ 0, u v^i w is in L. Furthermore, n is no greater than the number of states of the smallest FA accepting L.

Examples:
1. {0^(i²) | i ≥ 1} is not a regular language.
2. L, the set of strings of 0's and 1's, beginning with a 1, whose value treated as a binary number is a prime, is not a regular language.
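As a worked application of Lemma 3.1 (my own, using the standard argument for the language of Figure 13.1): suppose L = {0^k 1^k | k ≥ 1} were regular, with pumping-lemma constant n. Take z = 0ⁿ1ⁿ ∈ L. In any split z = uvw with |uv| ≤ n and |v| ≥ 1, v lies entirely within the leading 0's, say v = 0^j with j ≥ 1. Pumping with i = 2 gives u v² w = 0^(n+j) 1ⁿ, which has more 0's than 1's and so is not in L. This contradicts the lemma, so L is not regular.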
Theorem 4. The regular sets are closed under union (L₁ ∪ L₂), concatenation (L₁.L₂), and Kleene closure (L*).

Theorem 5. The class of regular sets is closed under complementation. That is, if L is a regular set and L ⊆ Σ*, then Σ* − L is a regular set.

Theorem 6. The regular sets are closed under intersection.

Part IV Computer Organization [3]

Chapter 15 Machine Instructions & Addressing Modes

15.1 Definitions
Definition 20 - Instruction Set. The vocabulary of commands understood by a given architecture.

Definition 21 - Stored Program Concept. The idea that instructions and data of many types can be stored in memory as numbers, leading to the stored-program computer.

Definition 22 - Word. The natural unit of access in a computer, usually a group of 32 bits; corresponds to the size of a register in the MIPS architecture. (Margin note: Is the size of a register = word size always? Interesting point: the word size of an architecture is often, but not always, defined by the size of the general-purpose registers.)

Definition 23 - Registers. Registers are a limited number of special locations built directly in hardware. The size of a register in the MIPS architecture is 32 bits.

Definition 24 - Data Transfer Instructions. A command that moves data between memory and registers.

Definition 25 - Address. A value used to delineate the location of a specific data element within a memory array. To access a word in memory, the instruction must supply the memory address. Memory is just a large, single-dimensional array, with the address acting as the index to that array, starting at 0.

Definition 26 - Base Register.
Definition 27 - Offset.

Definition 28 - Alignment Restriction. Words must start at addresses that are multiples of 4 (for MIPS). This requirement is called an alignment restriction.

Definition 29 - Little & Big Endian. Computers divide into those that use the address of the leftmost or "big end" byte as the word address versus those that use the rightmost or "little end" byte.

Definition 30 - Instruction Format. A form of representation of an instruction composed of fields of binary numbers. Example: add $t0, $s1, $s2, where $t0 is register 8 and $s0 is register 16 (so $s1 = 17, $s2 = 18):

op     | rs     | rt     | rd     | shamt  | funct
0      | 17     | 18     | 8      | 0      | 32
000000 | 10001  | 10010  | 01000  | 00000  | 100000
6 bits | 5 bits | 5 bits | 5 bits | 5 bits | 6 bits

1. op: Basic operation of the instruction, traditionally called the opcode.
2. rs: The first register source operand.
3. rt: The second register source operand.
4. rd: The register destination operand. It gets the result of the operation.
5. shamt: Shift amount. (Section 2.5 explains shift instructions and this term; it will not be used until then, and hence the field contains zero.)
6. funct: Function. This field selects the specific variant of the operation in the op field and is sometimes called the function code.

Definition 31 - Opcode. An opcode (operation code) is the portion of a machine language instruction that specifies the operation to be performed.

Definition 32 - Program Counter. The program counter (PC) is the register containing the address of the instruction in the program being executed.

Definition 33 - Addressing Modes. One of several addressing regimes delimited by their varied use of operands and/or addresses.

15.2 Design Principles
1. Simplicity favors regularity.
2. Smaller is faster. A very large number of registers may increase the clock cycle time simply because it takes electronic signals longer when they must travel farther.
3. Make the common case fast. Constant operands occur frequently, and by including constants inside arithmetic instructions, they are much faster than if constants were loaded from memory.
4. Good design demands good compromises. Different kinds of instruction formats for different kinds of instructions.

15.3 Machine Instructions [3, p.67,90]
1. Load lw: copies data from memory to a register. Format: lw register, mem. loc. of data (offset, reg. where base addr. is stored)
2. Store sw: copies data from register to memory. Format: sw register, mem. loc. of data (offset, reg. where base addr. is stored)
3. Add add:
4. Add Immediate addi:
5. Subtract sub:

lw $t0,8($s3)   # Temporary reg $t0 gets A[8]
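To make the field layout of Definition 30 concrete, here is a small C sketch (mine, not from the notes) that packs the add $t0, $s1, $s2 example into its 32-bit R-type encoding:

#include <stdio.h>
#include <stdint.h>

/* Pack MIPS R-type fields: op(6) rs(5) rt(5) rd(5) shamt(5) funct(6). */
uint32_t encode_rtype(uint32_t op, uint32_t rs, uint32_t rt,
                      uint32_t rd, uint32_t shamt, uint32_t funct) {
    return (op << 26) | (rs << 21) | (rt << 16)
         | (rd << 11) | (shamt << 6) | funct;
}

int main(void) {
    /* add $t0, $s1, $s2: op=0, rs=17 ($s1), rt=18 ($s2),
     * rd=8 ($t0), shamt=0, funct=32 */
    uint32_t word = encode_rtype(0, 17, 18, 8, 0, 32);
    printf("0x%08x\n", word);   /* prints 0x02324020 */
    return 0;
}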
Chapter 16 Pipelining

Definition 34 - Pipelining. Pipelining is an implementation technique in which multiple instructions are overlapped in execution. Pipelining doesn't decrease the time to finish a single piece of work but increases the throughput in a given time.

MIPS instructions take 5 steps:
1. Fetch instruction from memory.
2. Read registers while decoding the instruction. The format of MIPS instructions allows reading and decoding to occur simultaneously.
3. Execute the operation or calculate an address.
4. Access an operand in data memory.
5. Write the result into a register.

Time between instructions (pipelined) = Time between instructions (non-pipelined) / Number of pipe stages

Speedup due to pipelining ≈ number of pipe stages; a 5-stage pipelined process is 5 times faster (only for perfectly balanced stages). But real processes involve overhead, so the speedup will be somewhat less.
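A quick numeric check of the formula (illustrative numbers of my own, not from the notes): if a non-pipelined instruction takes 1000 ps and the datapath is split into 5 perfectly balanced stages, the time between pipelined instructions is 1000/5 = 200 ps, about a 5× throughput gain on a long run of instructions. If the slowest stage instead takes 250 ps, instructions can only be issued every 250 ps and the speedup drops to 1000/250 = 4×.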
Tiff any and Ben Bridge sell jewelry. Tiff any sells jewelry, but Gucci sells cologne. Tiff any sells jewelry; however, Gucci sells cologne. If Purdue raises tuition, then so does Notre Dame. Notre Dame raises tuition if Purdue does. Purdue raises tuition only if Notre Dame does. Cornell cuts enrollment provided that Brown does. Cornell cuts enrollment on condition that Brown does. BrownâĂŹs cutting enrollment implies that Cornell does. Aspen allows snowboards or Telluride does. Either Aspen allows snowboards or Telluride does. Aspen allows snowboards unless Telluride does. Unless Aspen allows snowboards, Telluride does. JFK tightens security if and only if OâĂŹHare does. JFKâĂŹs tightening security is a suffi cient and necessary condition for OâĂŹHareâĂŹs doing so. • The → symbol is also used to translate statements phrased in terms of sufficient conditions and necessary conditions. Event A is said to be a sufficient condition for event B whenever the occurrence of A is all that is required for the occurrence of B. On the other hand, event A is said to be a necessary condition for event B whenever B cannot occur without the occurrence of A. For example, having the flu is a sufficient condition for feeling miserable, whereas having air to breathe is a necessary condition for survival. Other things besides having the flu might cause a person to feel miserable, but that by itself is suffi cient; other things besides having air to breathe are required for survival, but without air survival is impossible. In other words, air is necessary. • To translate statements involving sufficient and necessary conditions into symbolic form, place the statement that names the sufficient condition in the antecedent of the conditional and the statement that names the necessary condition in the consequent. 1. HiltonâĂŹs opening a new hotel is a sufficient condition for MarriottâĂŹs doing so. H→M 2. HiltonâĂŹs opening a new hotel is a necessary condition for MarriottâĂŹs doing so. M→H Not either A or B. Either not A or not B. Not both A and B. Both not A and not B. • 19.2 Truth Table p • NEGATION 0 1 1S ∼(A ∨ B) ∼A ∨ ∼B ∼(A ∧ B) ∼A ∧ ∼B p 0 • AND 0 1 1 ∼p 1 0 - First Part, P - Second Part 35 q 0 1 0 1 p∧q 0 0 0 1 Chapter 19. Propositional Logic p 0 • OR 0 1 1 p 0 • IF 0 1 1 q 0 1 0 1 q 0 1 0 1 19.2. Truth Table p∨q 0 1 1 1 Column under Statement classifimain operator cation all true tautologous (logically true) • all false self-contradictory (logically false) at least one true, at contingent least one false p→q 1 1 0 1 p 0 • IF AND ONLY IF 0 1 1 • If all Premises are true and conclusion is false, then argument is invalid. • Assume argument invalid (true premises, false conclusion) q 0 1 0 1 p↔q 1 0 0 1 1. Contradiction → Argument is valid 2. No Contradiction → Argument is as assumed (i.e., invalid) • Assume statements consistent (assume all of them true) • To find truth value of long propositions (B∧C)→(E→A) (T∧T)→(F→T) T → T 1. Contradiction → Statements is consistent 2. No Contradiction → Statements is as assumed (i.e., inconsistent) T 36 Chapter 19. Propositional Logic 19.3 19.3. 
Rules Rules Rules of Implication Name Premises1 Premises2 Modus Ponens(MP) p→q p Modus Tollens(MT) p→q ∼q Hypothetical Syllogism(HS) p→q q→r Disjunctive Syllogism(DS) p∨q ∼p Constructive Dilemma(CD) (p → q) ∧ (r → s) p∨r Destructive Dilemma(DD) (p → q) ∧ (r → s) ∼ q ∨ ∼ s Conjunction p q Addition p Simplification p∧q Rules of Replacement ∼(p ∧ q) (∼p ∨ ∼q) De-Morgan’s Rule ∼(p ∨ q) (∼p ∧ ∼q) (p ∨ q) (q ∨ p) Commutative (p ∧ q) (q ∧ p) p ∨ (q ∨ r) (p ∨ q) ∨ r Associative p ∧ (q ∧ r) (p ∧ q) ∧ r p ∧ (q ∨ r) (p ∧ q) ∨ (p ∧ r) Distributive p ∨ (q ∧ r) (p ∨ q) ∧ (p ∨ r) Double-Negation p ∼∼p Transposition p→q ∼ q →∼ p Exportation (p ∧ q) → r p → (q → r) Material Implication p→q ∼p∧q p∧p Tautology p p∨p (p → q) ∧ (q → p) Material Equivalence p↔q (p ∧ q) ∨ (∼ p ∧ ∼ q) 19.4 Conclusion q ∼p p→r q q∨s ∼p∨∼r p∧q p∨q p Conditional Proof Conditional proof is a method for deriving a conditional statement (either the conclusion or some intermediate line) that off ers the usual advantage of being both shorter and simpler to use than the direct method. Let us suppose, for a moment, that we do have A. We could then derive B ∧ C from the first premise via modus ponens. Simplifying this expression we could derive B, and from this we could get B ∨ D via addition. E would then follow from the second premise via modus ponens. In other words, if we assume that we have A, we can get E. But this is exactly what the conclusion says. Thus, we have just proved that the conclusion follows from the premises. 1. A → (B ∧ C) 2.(B ∨ D) → E / A → E 3. A ACP 4. B ∧ C 1,3 MP 5. B 4 Simp 6. B ∨ D 5 Add 7. E 2,6 MP 8. A → E 3-7 CP 37 Chapter 19. Propositional Logic 19.5 19.5. Indirect Proof Indirect Proof It consists of assuming the negation of the statement to be obtained, using this assumption to derive a contradiction, and then concluding that the original assumption is false. Th is last step, of course, establishes the truth of the statement to be obtained. Th e following proof sequence uses indirect proof to derive the conclusion: 1. (A ∨ B) → (C ∧ D) 2. C → ∼D / ∼A 3. A AIP 4. A ∨ B 3 Add 5. C ∧ D 1,4 MP 6. C 5 Simp 7. ∼D 2,6 MP 8. D ∧ C 5 Com 9. D 8 Simp 10. D ∧ ∼D 7,9 Conj 11. ∼A 3-10 IP 38 Chapter 20 Predicate Logic 20.1 Introduction 39 Part VI Probability 40 Syllabus: Conditional Probability; Mean, Median, Mode and Standard Deviation; Random Variables; Distributions; uniform, normal, exponential, Poisson, Binomial. 41 Chapter 21 Definitions 21.1 Definitions 1. Probability Density Function Z Pr[a ≤ X ≤ b] = b fX (x) dx a . 2. Cumulative Distributive Function Z x FX (x) = fX (u) du −∞ fX (x) = . d FX (x) dx FX (x) = P(X ≤ x) P(a < X ≤ b) = FX (b) − FX (a) 3. Chebysev’s Inequality 1 k2 where X be a random variable with finite expected value µ and finite non-zero variance σ2 . k > 0 Pr(|X − µ| ≥ kσ) ≤ 4. Let x ¯ and s be the sample mean and sample standard deviation of the data set consisting of the data x1 , . . . , xn where s > 0. Let Sk = i, 1 ≤ i ≤ n : |xi − x ¯| < ks and let N (Sk ) be the number of elements in the set Sk . Then, for any k ≥ 1, N (Sk ) n−1 1 ≥1− ≥1− 2 2 n nk k 5. One sided Chebysev’s Inequality. For k > 0 N (k) 1 ≤ n 1 + k2 42 Chapter 21. Definitions 21.1. Definitions 6. Weak Law of Large Numbers Xn = 1 (X1 + · · · + Xn ) n converges to the expected value for Xn → µ n→∞ where X1 , X2 , . . . is an infinite sequence of independent and identically distributed random variables. 7. 
Chapter 20 Predicate Logic

20.1 Introduction

Part VI Probability

Syllabus: Conditional Probability; Mean, Median, Mode and Standard Deviation; Random Variables; Distributions: uniform, normal, exponential, Poisson, binomial.

Chapter 21 Definitions

21.1 Definitions

1. Probability Density Function:
   Pr[a ≤ X ≤ b] = ∫_a^b f_X(x) dx

2. Cumulative Distribution Function:
   F_X(x) = ∫_{−∞}^{x} f_X(u) du,  f_X(x) = (d/dx) F_X(x)
   F_X(x) = P(X ≤ x),  P(a < X ≤ b) = F_X(b) − F_X(a)

3. Chebyshev's Inequality: let X be a random variable with finite expected value µ and finite non-zero variance σ². Then for any k > 0,
   Pr(|X − µ| ≥ kσ) ≤ 1/k²

4. Sample form: let x̄ and s be the sample mean and sample standard deviation of the data set consisting of the data x₁, …, x_n, where s > 0. Let S_k = {i, 1 ≤ i ≤ n : |x_i − x̄| < ks} and let N(S_k) be the number of elements in the set S_k. Then, for any k ≥ 1,
   N(S_k)/n ≥ 1 − (n − 1)/(nk²) ≥ 1 − 1/k²

5. One-sided sample Chebyshev inequality: for k > 0,
   N(k)/n ≤ 1/(1 + k²)

6. Weak Law of Large Numbers: if X₁, X₂, … is an infinite sequence of independent and identically distributed random variables with expected value µ, then the sample average
   X̄_n = (1/n)(X₁ + ⋯ + X_n)
   converges to µ as n → ∞.

7. Markov's Inequality: if X is a random variable that takes only nonnegative values, then for any value a > 0,
   P(X ≥ a) ≤ E[X]/a

8. Chebyshev's Inequality (restated): if X is a random variable with mean µ and variance σ², then for any value k > 0,
   P(|X − µ| ≥ k) ≤ σ²/k², equivalently P(|X − µ| ≥ kσ) ≤ 1/k²

9. Bayes' Theorem: assume an event E can happen together with A as well as with B. Then
   P(E) = P(E ∩ A) + P(E ∩ B) = P(A)P(E|A) + P(B)P(E|B)
   Given that E has already happened,
   P(A|E) = P(A ∩ E)/P(E) = P(A)P(E|A) / (P(A)P(E|A) + P(B)P(E|B))

Chapter 22 Probability

22.1 Terms

• Independent events: since P(E|F) = P(E ∩ F)/P(F), we see that E is independent of F if
  P(E ∩ F) = P(E)P(F)

• The three events E, F and G are said to be independent if
  P(E ∩ F ∩ G) = P(E)P(F)P(G)
  P(E ∩ F) = P(E)P(F)
  P(E ∩ G) = P(E)P(G)
  P(F ∩ G) = P(F)P(G)

22.2 Propositions

• If E and F are independent, then so are E and Fᶜ.

• Mean: for a discrete random variable X taking values X_i with probabilities P(X_i),
  E[X] = Σ_{i=0}^{n} X_i P(X_i)

• Median, Mode, Standard Deviation: see Section 22.4.

22.3 Conditional Probability

• The conditional probability of E given that F has occurred is denoted by P(E|F).

• P(E|F) = P(E ∩ F)/P(F), defined only when P(F) > 0.

• Example: suppose that one rolls a pair of dice. The sample space S of this experiment can be taken to be the following set of 36 outcomes:
  S = {(i, j) : i = 1, …, 6, j = 1, …, 6}
  where we say that the outcome is (i, j) if the first die lands on side i and the second on side j. Suppose further that we observe that the first die lands on side 3. Then, given this information, what is the probability that the sum of the two dice equals 8?
  Answer: given that the initial die is a 3, there can be at most 6 possible outcomes, namely (3,1), (3,2), (3,3), (3,4), (3,5) and (3,6). Because each of these outcomes originally had the same probability of occurring, they should still have equal probabilities: the conditional probability of each is 1/6, whereas the conditional probability of the other 30 points in the sample space is 0. Of these six outcomes only (3,5) gives a sum of 8, so the desired probability is 1/6.

22.3.1 Bayes Formula

• Let E and F be events. We may express E as
  E = (E ∩ F) ∪ (E ∩ Fᶜ)
  for, in order for a point to be in E, it must either be in both E and F, or be in E but not in F.

• As E ∩ F and E ∩ Fᶜ are clearly mutually exclusive,
  P(E) = P(E ∩ F) + P(E ∩ Fᶜ)
       = P(E|F)P(F) + P(E|Fᶜ)P(Fᶜ)
       = P(E|F)P(F) + P(E|Fᶜ)[1 − P(F)]

22.4 Mean, Median, Mode and Standard Deviation

• Properties of Mean:

1. General formula for continuous distributions:
   E[X] = ∫_{−∞}^{∞} x f(x) dx
2. For a function g of a continuous random variable:
   E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx
3. E[c] = c, where c is a constant.
4. Monotonicity: if X ≤ Y, then E[X] ≤ E[Y].
5. Linearity: E[aX + bY + c] = aE[X] + bE[Y] + c
6. E[X|Y = y] = Σ_x x · P(X = x|Y = y)
7. Law of total expectation: E[X] = E[E[X|Y]] (checked numerically in the sketch after this list).
8. |E[X]| ≤ E[|X|]
9. Cov(X, Y) = E[XY] − E[X]E[Y]
10. If Cov(X, Y) = 0, X and Y are said to be uncorrelated (independent variables are a notable case of uncorrelated variables).
11. Variance: Var(X) = E[X²] − (E[X])²
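The dice example above doubles as a test bed for these expectation properties. A small exact-arithmetic sketch (the variable names are ours):

```python
from itertools import product
from fractions import Fraction

# Two fair dice; X = sum of the dice, Y = value of the first die.
S = list(product(range(1, 7), repeat=2))
pr = Fraction(1, len(S))                      # each outcome equally likely
E = lambda f: sum(f(o) * pr for o in S)       # expectation over the space
X = lambda o: o[0] + o[1]
Y = lambda o: o[0]

# Conditional probability (Section 22.3): P(X = 8 | Y = 3) = 1/6.
pF  = sum(pr for o in S if Y(o) == 3)
pEF = sum(pr for o in S if Y(o) == 3 and X(o) == 8)
print(pEF / pF)                               # 1/6

# Law of total expectation: E[X] = E[E[X | Y]].
def E_X_given(y):
    outs = [o for o in S if Y(o) == y]
    return sum(Fraction(X(o), len(outs)) for o in outs)
print(E(X), sum(Fraction(1, 6) * E_X_given(y) for y in range(1, 7)))   # 7 7

# Cov(X, Y) = E[XY] - E[X]E[Y]; positive here, since X contains Y.
print(E(lambda o: X(o) * Y(o)) - E(X) * E(Y))  # 35/12
```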
• Properties of Median:

1. Value of m for discrete distributions:
   P(X ≤ m) ≥ 1/2 and P(X ≥ m) ≥ 1/2
2. Value of m for continuous distributions:
   ∫_{(−∞, m]} dF(x) ≥ 1/2 and ∫_{[m, ∞)} dF(x) ≥ 1/2
   where F(x) is the cumulative distribution function.
3. All three measures have the following property: if the random variable (or each value from the sample) is subjected to the linear or affine transformation which replaces X by aX + b, so are the mean, median and mode.
4. However, if there is an arbitrary monotonic transformation, only the median follows; for example, if X is replaced by exp(X), the median changes from m to exp(m), but the mean and mode won't.
5. In continuous unimodal distributions the median lies, as a rule of thumb, between the mean and the mode, about one third of the way going from mean to mode. In a formula, median ≈ (2 × mean + mode)/3. This rule, due to Karl Pearson, often applies to slightly non-symmetric distributions that resemble a normal distribution, but it is not always true and in general the three statistics can appear in any order.
6. For unimodal distributions, the mode is within √3 standard deviations of the mean, and the root mean square deviation about the mode is between the standard deviation and twice the standard deviation.

• Properties of Standard Deviation:

1. Represented by σ.
2. σ = √(E[(X − µ)²]) = √(E[X²] − (E[X])²)
3. σ² = Variance.
4. Continuous random variable: the standard deviation of a continuous real-valued random variable X with probability density function p(x) is
   σ = √( ∫_X (x − µ)² p(x) dx ),  where µ = ∫_X x p(x) dx
5. Discrete random variable: in the case where X takes random values from a finite data set x₁, x₂, …, x_N, with x_i having probability p_i, the standard deviation is
   σ = √( Σ_{i=1}^{N} p_i (x_i − µ)² ),  where µ = Σ_{i=1}^{N} p_i x_i

22.5 Distributions

22.5.1 Bernoulli

• Suppose that a trial, or an experiment, whose outcome can be classified as either a "success" or a "failure" is performed. If we let X = 1 when the outcome is a success and X = 0 when it is a failure, then the probability mass function of X is given by
  P(X = 0) = 1 − p
  P(X = 1) = p
  where p, 0 ≤ p ≤ 1, is the probability that the trial is a "success".

22.5.2 Hypergeometric

A bin contains N + M batteries, of which N are of acceptable quality and the other M are defective. A sample of size n is to be randomly chosen (without replacement), in the sense that the set of sampled batteries is equally likely to be any of the C(N + M, n) subsets of size n. If we let X denote the number of acceptable batteries in the sample, then
  P(X = i) = C(N, i) C(M, n − i) / C(N + M, n),  0 ≤ i ≤ n
where C(a, b) denotes the binomial coefficient.

22.5.3 Uniform

  f(x) = 1/(β − α)  if α ≤ x ≤ β
       = 0          otherwise

22.5.4 Normal

  f(x) = (1 / (√(2π) σ)) e^{−(x − µ)² / (2σ²)}

22.5.5 Exponential

  f(x) = λ e^{−λx}  if x ≥ 0
       = 0          if x < 0

22.5.6 Poisson

  P(X = i) = e^{−λ} λ^i / i!,  i = 0, 1, 2, …

22.5.7 Binomial

  P(X = k) = C(n, k) p^k (1 − p)^{n − k},  0 ≤ k ≤ n

22.5.8 Summary

  Name        | Notation | Mean | Median                     | Mode                         | Variance
  Bernoulli   | Bern(p)  | p    | 0 if p < 1/2, 1 if p > 1/2 | 0 if p < 1/2, 1 if p > 1/2   | p(1 − p)
  Binomial    | B(n, p)  | np   | ⌊np⌋ or ⌈np⌉               | ⌊(n + 1)p⌋ or ⌊(n + 1)p − 1⌋ | np(1 − p)
  Exponential | Exp(λ)   | 1/λ  | (ln 2)/λ                   | 0                            | 1/λ²
  Normal      | N(µ, σ²) | µ    | µ                          | µ                            | σ²
  Poisson     | Pois(λ)  | λ    | ≈ ⌊λ + 1/3 − 0.02/λ⌋       | ⌊λ⌋                          | λ

  Table 22.1: Probability Formula Table
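The Table 22.1 entries for the binomial distribution, and Chebyshev's inequality from Chapter 21, are easy to sanity-check from the PMF directly. A sketch with arbitrary example parameters n = 10, p = 0.3:

```python
from math import comb

# Binomial(n, p): exact PMF.
n, p = 10, 0.3
pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]

mean = sum(k * q for k, q in enumerate(pmf))                 # np            = 3.0
var  = sum(k * k * q for k, q in enumerate(pmf)) - mean**2   # np(1-p)       = 2.1
mode = max(range(n + 1), key=pmf.__getitem__)                # floor((n+1)p) = 3
print(mean, var, mode)

# Chebyshev: P(|X - mu| >= k*sigma) <= 1/k^2 for every k > 0.
sigma = var ** 0.5
for k in (1.5, 2, 3):
    tail = sum(q for x, q in enumerate(pmf) if abs(x - mean) >= k * sigma)
    print(f"k={k}: P = {tail:.4f} <= bound {1 / k**2:.4f}")
```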
  Name        | PGF                     | MGF              | Characteristic Function
  Bernoulli   | 1 − p + pz              | 1 − p + pe^t     | 1 − p + pe^{it}
  Binomial    | G(z) = [(1 − p) + pz]^n | (1 − p + pe^t)^n | (1 − p + pe^{it})^n
  Exponential | n/a                     | λ/(λ − t), t < λ | λ/(λ − it)
  Normal      | n/a                     | e^{µt + σ²t²/2}  | e^{iµt − σ²t²/2}
  Poisson     | e^{λ(z − 1)}            | e^{λ(e^t − 1)}   | e^{λ(e^{it} − 1)}

  Table 22.2: Probability Formula Table

Part VII HTML

Chapter 23 Basic Commands

23.1 Basics

1. The DOCTYPE declaration defines the document type.
2. The text between <html> and </html> describes the web page.
3. The text between <body> and </body> is the visible page content.
4. The text between <h1> and </h1> is displayed as a heading.
5. The text between <p> and </p> is displayed as a paragraph.
6. HTML stands for Hyper Text Markup Language.
7. HTML is a markup language.
8. HTML tags are not case sensitive.
9. HTML elements with no content are called empty elements.
10. HTML elements can have attributes.
11. Attributes provide additional information about an element.
12. Attributes are always specified in the start tag.
13. Attributes come in name/value pairs like: name="value".
14. Attribute names and attribute values are case-insensitive (lower case recommended).
15. The <hr> tag creates a horizontal line in an HTML page.

A minimal page assembling these pieces is sketched below.
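The sketch is illustrative only; the heading text, paragraph text and the title attribute value are invented examples of the name="value" pairs described in items 10-14:

```html
<!DOCTYPE html>
<html>
  <body>
    <!-- Everything inside <body> is the visible page content. -->
    <h1>A heading</h1>
    <p title="a name/value attribute pair">A paragraph of visible content.</p>
    <hr>
    <p>Text after the horizontal line.</p>
  </body>
</html>
```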
23.2 Tags

1. <html> … </html>
2. <body> … </body>
3. <h1> … </h1>
4. <p> … </p>
5. <br> is an empty element without a closing tag
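The distinction in item 5 between container elements and empty elements, shown inline (an illustrative fragment):

```html
<!-- A container element: content sits between the start and end tags. -->
<p>This paragraph has both an opening and a closing tag.</p>

<!-- Empty elements carry no content and need no closing tag. -->
<br>
<hr>
```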
23.3 Tags Reference

Basic: …, <h1> to <h6>, …

Formatting