1. In computational complexity theory, a complexity class is a set of problems of related resource-based complexity.
2. A typical complexity class has a definition of the form: the set of problems that can be solved by an abstract machine M using O(f(n)) of resource R, where n is the size of the input.
3. Complexity classes are concerned with the rate of growth of the requirement in resources as the input n increases.
4. Cobham's thesis states that polynomial time is a synonym for "tractable", "feasible", "efficient", or "fast".
5. Any problem not contained in P is considered infeasible; conversely, if a real-world problem can be solved by an algorithm in P, such an algorithm will generally be discovered eventually.
6. In a similar spirit, the complexity class NC can be thought of as capturing problems "effectively solvable" on a parallel computer.
7. Typical input lengths that users and programmers are interested in are approximately between 100 and 1,000,000. Consider an input length of n = 100 and a polynomial algorithm whose running time is n², a typical running time for a polynomial algorithm.
8. A typical CPU will be able to do approximately 10^9 operations per second (this is extremely simplified).
9. An algorithm that runs in exponential time might have a running time of 2^n.
10. Mathematically speaking, for big enough inputs, any polynomial time algorithm will beat any exponential time algorithm, and by arbitrarily large amounts.
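At the 10^9 operations-per-second rate from above, the gap can be made concrete with a small sketch (the machine speed and chosen inputs are illustrative assumptions):

```python
# Rough wall-clock estimates at an assumed 10^9 operations per second.
OPS_PER_SECOND = 10**9

def seconds_quadratic(n):
    # Running time of an n^2-time algorithm on an input of length n.
    return n**2 / OPS_PER_SECOND

def seconds_exponential(n):
    # Running time of a 2^n-time algorithm on an input of length n.
    return 2**n / OPS_PER_SECOND
```

For n = 100, the quadratic algorithm needs about 10 microseconds, while the exponential one needs about 1.3 × 10^21 seconds, on the order of 10^13 years.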
11. There are many lines of objection to Cobham's thesis. The thesis essentially states that "P" means "easy, fast, and practical," while "not in P" means "hard, slow, and impractical."
12. Mathematical symbols can designate numbers (constants), variables, operations, functions, punctuation, grouping, and other aspects of logical syntax.
14. Problems are said to be tractable if they can be solved in terms of a closed-form expression.
15. The class of expressions considered to be analytic expressions tends to be wider than that for closed-form expressions. In particular, special functions such as the Bessel functions and the gamma function are usually allowed, and often so are infinite series and continued fractions. On the other hand, limits in general, and integrals in particular, are typically excluded.
16. If a is an algebraic number different from 0 and 1, and b is an irrational algebraic number, then all values of a^b are transcendental numbers (that is, not algebraic). For example, 2^√2 is transcendental.
17. In mathematics, the logarithm is the inverse operation to exponentiation.
18. Trigonometric functions are important in the study of triangles and modeling periodic phenomena, among many other applications.
19. Semantics is the study of meaning. Formal semantics is about attaching meaning to expressions.
20. Algebra studies two main families of equations: polynomial equations and, among them, linear equations. Polynomial equations have the form P(x) = 0, where P is a polynomial. Linear equations have the form a(x) + b = 0, where a is a linear function and b is a vector. To solve them, one uses algorithmic or geometric techniques, coming from linear algebra or mathematical analysis.
21. The P versus NP problem is a major unsolved problem in computer science. Informally speaking, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer
22. When employed in industrial contexts, machine learning methods may be referred to as predictive analytics or predictive modelling.
23. In essence, a Turing machine is imagined to be a simple computer that reads and writes symbols one at a time on an endless tape by strictly following a set of rules.
24. A tuple is a finite ordered list of elements. In mathematics, an n-tuple is a sequence (or ordered list) of n elements, where n is a non-negative integer.
25. Turing machines, first described by Alan Turing in (Turing 1937), are simple abstract computational devices intended to help investigate the extent and limitations of what can be computed.
26. Turing was interested in the question of what it means for a task to be computable, which is one of the foundational questions in the philosophy of computer science. Intuitively a task is computable if it is possible to specify a sequence of instructions which will result in the completion of the task when they are carried out by some machine. Such a set of instructions is called an effective procedure, or algorithm, for the task. The problem with this intuition is that what counts as an effective procedure may depend on the capabilities of the machine used to carry out the instructions. In principle, devices with different capabilities may be able to complete different instruction sets, and therefore may result in different classes of computable tasks
27. Turing proposed a class of devices that came to be known as Turing machines. These devices lead to a formal notion of computation that we will call Turing-computability. A task is Turing computable if it can be carried out by some Turing machine.
28. Turing machines are not physical objects but mathematical ones
29. A Turing machine is a kind of state machine. At any time the machine is in any one of a finite number of states. Instructions for a Turing machine consist in specified conditions under which the machine will transition between one state and another.
30. Even when a problem is decidable and thus computationally solvable in principle, it may not be solvable in practice if the solution requires an inordinate amount of time or memory.
31. In worst-case analysis, the form we consider here, we consider the longest running time of all inputs of a particular length. In average-case analysis, we consider the average of all the running times of inputs of a particular length.
32. In one convenient form of estimation, called asymptotic analysis, we seek to understand the running time of the algorithm when it is run on large inputs. We do so by considering only the highest order term of the expression for the running time of the algorithm, disregarding both the coefficient of that term and any lower order terms, because the highest order term dominates the other terms on large inputs.
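As a quick illustration of why only the highest order term matters, take an arbitrary example polynomial f(n) = 6n³ + 2n² + 20n + 45 and watch the lower order terms vanish relative to n³:

```python
# Illustrates asymptotic analysis: for f(n) = 6n^3 + 2n^2 + 20n + 45
# (an arbitrary example), the ratio f(n) / n^3 approaches the leading
# coefficient 6 as n grows, so f(n) is O(n^3).
def f(n):
    return 6 * n**3 + 2 * n**2 + 20 * n + 45

for n in (10, 100, 10_000):
    print(n, f(n) / n**3)  # ratio shrinks toward 6
```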
33. The expression O(1) represents a value that is never more than a fixed constant.
34. Any language that can be decided in o(n log n) time on a single-tape Turing machine is regular.
35. We exhibited a two-tape TM M3 that decides A in O(n) time. Hence the time complexity of A on a single-tape TM is O(n log n) and on a two-tape TM it is O(n). Note that the complexity of A depends on the model of computation selected.
36. This discussion highlights an important difference between complexity theory and computability theory. In computability theory, the Church-Turing thesis implies that all reasonable models of computation are equivalent; that is, they all decide the same class of languages. In complexity theory, the choice of model affects the time complexity of languages. Languages that are decidable in, say, linear time on one model aren't necessarily decidable in linear time on another.
37. In complexity theory, we classify computational problems according to their time complexity.
38. For our purposes, polynomial differences in running time are considered to be small, whereas exponential differences are considered to be large.
39. Polynomial time algorithms are fast enough for many purposes, but exponential time algorithms rarely are useful.
40. Exponential time algorithms typically arise when we solve problems by exhaustively searching through a space of solutions, called brute-force search. Sometimes, brute-force search may be avoided through a deeper understanding of a problem, which may reveal a polynomial time algorithm of greater utility.
41. All reasonable deterministic computational models are polynomially equivalent.
42. our aim is to present the fundamental properties of computation, rather than properties of Turing machines or any other special model.
43. P is invariant for all models of computation that are polynomially equivalent to the deterministic single-tape Turing machine, and
44. P roughly corresponds to the class of problems that are realistically solvable on a computer.
45. When we present a polynomial time algorithm, we give a high-level description of it without reference to features of a particular computational model. Doing so avoids tedious details of tapes and head motions. We need to follow certain conventions when describing an algorithm so that we can analyze it for polynomiality
46. We describe algorithms with numbered stages. The notion of a stage of an algorithm is analogous to a step of a Turing machine, though of course, implementing one stage of an algorithm on a Turing machine, in general, will require many Turing machine steps.
47. When we analyze an algorithm to show that it runs in polynomial time, we need to do two things. First, we have to give a polynomial upper bound (usually in big-O notation) on the number of stages that the algorithm uses when it runs on an input of length n. Then, we have to examine the individual stages in the description of the algorithm to be sure that each can be implemented in polynomial time on a reasonable deterministic model.
48. When both tasks have been completed, we can conclude that the algorithm runs in polynomial time because we have demonstrated that it runs for a polynomial number of stages, each of which can be done in polynomial time, and the composition of polynomials is a polynomial.
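As a sketch of these conventions, here is a stage-numbered polynomial time algorithm for the PATH problem (given a directed graph and nodes s and t, decide whether a path from s to t exists); the problem choice and encoding are illustrative:

```python
# A polynomial time algorithm described in numbered stages, in the
# spirit of the conventions above. Each stage is clearly implementable
# in polynomial time on a reasonable deterministic model.
def path(edges, s, t):
    # Stage 1: mark node s.
    marked = {s}
    # Stage 2: repeat until no additional nodes are marked: scan all
    # edges; if (a, b) is an edge from a marked node a to an unmarked
    # node b, mark b. Each repetition marks at least one new node, so
    # there are at most |V| repetitions, each scanning |E| edges.
    changed = True
    while changed:
        changed = False
        for a, b in edges:
            if a in marked and b not in marked:
                marked.add(b)
                changed = True
    # Stage 3: accept if t is marked; otherwise, reject.
    return t in marked
```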
49. We continue to use the angle-bracket notation ⟨·⟩ to indicate a reasonable encoding of one or more objects into a string, without specifying any particular encoding method.
50. We can avoid brute-force search in many problems and obtain polynomial time solutions.
51. However, attempts to avoid brute force in certain other problems, including many interesting and useful ones, haven't been successful, and polynomial time algorithms that solve them aren't known to exist.
52. One remarkable discovery concerning this question shows that the complexities of many problems are linked. A polynomial time algorithm for one such problem can be used to solve an entire class of problems.
53. Verifying the existence of a Hamiltonian path may be much easier than determining its existence.
54. A polynomial time algorithm for testing whether a number is prime or composite was discovered, but it is considerably more complicated than the preceding method for verifying compositeness.
55. NP is the class of languages that have polynomial time verifiers.
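A verifier in this sense can be sketched for compositeness, where the certificate is a nontrivial divisor (the function name and representation are illustrative assumptions):

```python
# A polynomial time verifier for COMPOSITES: the certificate c for the
# claim "n is composite" is a nontrivial divisor of n. Checking the
# certificate takes time polynomial in the length of n, even though
# *finding* such a divisor may be hard.
def verify_composite(n, c):
    return 1 < c < n and n % c == 0
```

For example, verify_composite(91, 7) accepts, while verify_composite(91, 5) rejects that particular certificate.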
56. The term NP comes from nondeterministic polynomial time and is derived from an alternative characterization by using nondeterministic polynomial time Turing machines.
57. Problems in NP are sometimes called NP-problems.
58. We show how to convert a polynomial time verifier to an equivalent polynomial time NTM and vice versa. The NTM simulates the verifier by guessing the certificate. The verifier simulates the NTM by using the accepting branch as the certificate.
59. The class NP is insensitive to the choice of reasonable nondeterministic computational model because all such models are polynomially equivalent.
60. When describing and analyzing nondeterministic polynomial time algorithms, we follow the preceding conventions for deterministic polynomial time algorithms. Each stage of a nondeterministic polynomial time algorithm must have an obvious implementation in nondeterministic polynomial time on a reasonable nondeterministic computational model. We analyze the algorithm to show that every branch uses at most polynomially many stages.
61. Verifying that something is not present seems to be more difficult than verifying that it is present.
62. We make a separate complexity class, called coNP, which contains the languages that are complements of languages in NP. We don't know whether coNP is different from NP.
63. As we have been saying, NP is the class of languages that are solvable in polynomial time on a nondeterministic Turing machine, or, equivalently, it is the class of languages whereby membership in the language can be verified in polynomial time. P is the class of languages where membership can be tested in polynomial time.
64. We loosely refer to polynomial time solvable as solvable "quickly."
65. P = the class of languages for which membership can be decided quickly.
66. NP = the class of languages for which membership can be verified quickly.
67. The power of polynomial verifiability seems to be much greater than that of polynomial decidability. But, hard as it may be to imagine, P and NP could be equal. We are unable to prove the existence of a single language in NP that is not in P
68. The question of whether P = NP is one of the greatest unsolved problems in theoretical computer science and contemporary mathematics.
70. The best method known for solving languages in NP deterministically uses exponential time.
71. They discovered certain problems in NP whose individual complexity is related to that of the entire class. If a polynomial time algorithm exists for any of these problems, all problems in NP would be polynomial time solvable. These problems are called NP-complete.
72. On the theoretical side, a researcher trying to show that P is unequal to NP may focus on an NP-complete problem. If any problem in NP requires more than polynomial time, an NP-complete one does. Furthermore, a researcher attempting to prove that P equals NP only needs to find a polynomial time algorithm for an NP-complete problem to achieve this goal.
73. Even though we may not have the necessary mathematics to prove that the problem is unsolvable in polynomial time, we believe that P is unequal to NP, so proving that a problem is NP-complete is strong evidence of its nonpolynomiality.
74. A Boolean formula is satisfiable if some assignment of 0s and 1s to the variables makes the formula evaluate to 1.
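A minimal brute-force satisfiability test, trying every assignment of 0s and 1s, might look like the sketch below (the formula encoding is an assumption for illustration); its exponential running time is exactly the brute force that the P versus NP question asks whether we can avoid:

```python
from itertools import product

# Brute-force satisfiability: try all 2^num_vars assignments.
# A formula is encoded as a function taking a tuple of 0/1 values.
def satisfiable(formula, num_vars):
    return any(formula(bits) for bits in product([0, 1], repeat=num_vars))

# phi = (x1 OR x2) AND (NOT x1 OR x2): satisfied by x2 = 1.
phi = lambda b: (b[0] or b[1]) and ((not b[0]) or b[1])
```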
75. If one language is polynomial time reducible to a language already known to have a polynomial time solution, we obtain a polynomial time solution to the original language.
76. Showing that SAT is in NP is easy, and we do so shortly. The hard part of the proof is showing that any language in NP is polynomial time reducible to SAT.
77. NP-completeness can be proved with a polynomial time reduction from a language that is already known to be NP-complete.
78. If you seek a polynomial time algorithm for a new NP-problem, spending part of your effort attempting to prove it NP-complete is sensible because doing so may prevent you from working to find a polynomial time algorithm that doesn't exist.
79. Our general strategy is to exhibit a polynomial time reduction from 3SAT to the language in question, though we sometimes reduce from other NP-complete languages when that is more convenient.
80. To show that VERTEX-COVER is NP-complete we must show that it is in NP and that all NP-problems are polynomial time reducible to it.
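The "in NP" half can be sketched as a polynomial time certificate check, with the certificate being a set of at most k nodes (the encoding is an illustrative assumption):

```python
# A polynomial time verifier for VERTEX-COVER: the certificate is a
# candidate cover; the verifier checks that it has at most k nodes and
# that every edge touches at least one node in it.
def verify_vertex_cover(edges, cover, k):
    return len(cover) <= k and all(a in cover or b in cover
                                   for a, b in edges)
```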
81. Time and space are two of the most important considerations when we seek practical solutions to many computational problems.
82. NP-complete languages can be regarded as representing the most difficult languages in NP.
83. Complete problems are important because they are examples of the most difficult problems in a complexity class
84. A complete problem is most difficult because any other problem in the class is easily reduced into it, so if we find an easy way to solve the complete problem, we can easily solve all other problems in the class.
85. The reduction must be easy, relative to the complexity of typical problems in the class, for this reasoning to apply. If the reduction itself were difficult to compute, an easy solution to the complete problem wouldn't necessarily yield an easy solution to the problems reducing to it.
86. Whenever we define complete problems for a complexity class, the reduction model must be more limited than the model used for defining the class itself.
87. To show that TQBF is in PSPACE we give a straightforward algorithm that assigns values to the variables and recursively evaluates the truth of the formula for those values. From that information the algorithm can determine the truth of the original quantified formula.
88. To show that every language A in PSPACE reduces to TQBF in polynomial time, we begin with a polynomial space-bounded Turing machine for A. Then we give a polynomial time reduction that maps a string to a quantified Boolean formula X that encodes a simulation of the machine on that input. The formula is true iff the machine accepts.
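The straightforward recursive evaluation can be sketched as follows; the recursion depth equals the number of quantified variables, so the space used is polynomial (the formula encoding is an assumption for illustration):

```python
# Recursive evaluation of a fully quantified Boolean formula.
# quantifiers: list of ('A', name) or ('E', name) pairs, outermost first;
# matrix: a function of the tuple of assigned 0/1 values, in quantifier
# order. The recursion stack is the only space used.
def eval_tqbf(quantifiers, matrix, assignment=()):
    if not quantifiers:
        return bool(matrix(assignment))
    q, _ = quantifiers[0]
    branches = (eval_tqbf(quantifiers[1:], matrix, assignment + (v,))
                for v in (0, 1))
    # 'A' (for all) requires both branches true; 'E' (exists) requires one.
    return all(branches) if q == 'A' else any(branches)
```

For instance, ∀x ∃y (x ≠ y) is true, since for each x the player choosing y can pick 1 − x.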
89. For the purposes of this section, a game is loosely defined to be a competition in which opposing parties attempt to achieve some goal according to prespecified rules.
90. Games appear in many forms, from board games such as chess to economic and war games that model corporate or societal conflict.
91. To illustrate the correspondence between games and quantifiers, we turn to an artificial game called the formula game.
92. We say that Player E has a winning strategy for this game. A player has a winning strategy for a game if that player wins when both sides play optimally.
93. To prove that GG is PSPACE-hard, we give a polynomial time reduction from FORMULA-GAME to GG.
94. The only space required by this algorithm is for storing the recursion stack. Each level of the recursion adds a single node to the stack, and at most m levels occur, where m is the number of nodes in G. Hence the algorithm runs in linear space.
95. Sublinear space algorithms allow the computer to manipulate the data without storing all of it in main memory
96. Logarithmic space is just large enough to solve a number of interesting computational problems, and it has attractive mathematical properties such as robustness even when machine model and input encoding method change. Pointers into the input may be represented in logarithmic space, so one way to think about the power of log space algorithms is to consider the power of a fixed number of input pointers.
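One way to picture a fixed number of input pointers: deciding whether a string is a palindrome needs only two indices into a read-only input, each representable in O(log n) bits (a sketch of the idea, not a formal log space machine):

```python
# A log-space-style algorithm: the only working storage is two pointers
# into the input, never a copy of the input itself.
def is_palindrome(s):
    i, j = 0, len(s) - 1        # two pointers into the read-only input
    while i < j:
        if s[i] != s[j]:
            return False
        i, j = i + 1, j - 1
    return True
```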
97. Most computer-aided proofs to date have been implementations of large proofs-by-exhaustion of a mathematical theorem. The idea is to use a computer program to perform lengthy computations, and to provide a proof that the result of these computations implies the given theorem.
98. A Boolean formula is satisfiable if some assignment of 0s and 1s to the variables makes the formula evaluate to 1.
99. The satisfiability problem is to test whether a Boolean formula is satisfiable.
100. A cnf-formula is a 3cnf-formula if all the clauses have three literals.
101. In a satisfiable cnf-formula, each clause must contain at least one literal that is assigned 1.
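That clause-by-clause condition can be checked directly; here a cnf-formula is encoded as a list of clauses of signed variable indices (an illustrative encoding, not a standard one):

```python
# A cnf-formula as a list of clauses; literal +i means variable i,
# literal -i means its negation. An assignment satisfies the formula
# iff every clause contains at least one literal assigned 1.
def satisfies_cnf(clauses, assignment):
    # assignment maps each variable index to 0 or 1
    return all(any(assignment[abs(l)] == (1 if l > 0 else 0)
                   for l in clause)
               for clause in clauses)

# (x1 OR NOT x2) AND (x2 OR x3), under x1=1, x2=1, x3=0.
```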
Theoretical CS:
Theory
Definition, Theorems, Lemma, Corollary
Formula
Methodology
Complexity
Topics
Research
1. P is a complexity class that represents the set of all decision problems that can be solved in polynomial time. That is, given an instance of the problem, the answer yes or no can be decided in polynomial time.
2. Decision problem: A problem with a yes or no answer.
3. NP is a complexity class that represents the set of all decision problems for which the instances where the answer is "yes" have proofs that can be verified in polynomial time.
4. NP-Complete is a complexity class which represents the set of all problems X in NP for which it is possible to reduce any other NP problem Y to X in polynomial time.
5. Intuitively, NP-Hard are the problems that are at least as hard as the NP-complete problems. Note that NP-hard problems do not have to be in NP, and they do not have to be decision problems.
Turing Machine:
1. A Turing machine has an infinite one-dimensional tape divided into cells. Traditionally we think of the tape as being horizontal with the cells arranged in a left-right orientation. The tape has one end, at the left say, and stretches infinitely far to the right. Each cell is able to contain one symbol, either ‘0’ or ‘1’.
2. The machine has a read-write head which is scanning a single cell on the tape. This read-write head can move left and right along the tape to scan successive cells.
3. The action of a Turing machine is determined completely by (1) the current state of the machine (2) the symbol in the cell currently being scanned by the head and (3) a table of transition rules, which serve as the “program” for the machine.
4. If the machine is in state State_current and the cell being scanned contains Symbol, then move into state State_next, taking Action.
5. ⟨ State_current, Symbol, State_next, Action ⟩
6. As actions, a Turing machine may either write a symbol on the tape in the current cell (which we will denote with the symbol in question), or move the head one cell to the left or right, which we will denote by the symbols « and » respectively.
7. In modern terms, the tape serves as the memory of the machine, while the read-write head is the memory bus through which data is accessed (and updated) by the machine.
8. The first concerns the definition of the machine itself, namely that the machine's tape is infinite in length. This corresponds to an assumption that the memory of the machine is infinite. The second concerns the definition of Turing-computable, namely that a function will be Turing-computable if there exists a set of instructions that will result in a Turing machine computing the function regardless of the amount of time it takes. One can think of this as assuming the availability of infinite time to complete the computation.
9. If the machine reaches a situation in which there is no unique transition rule to be carried out (i.e., there is none, or more than one), then the machine halts.
10. In order to speak about a Turing machine that does something useful, we will have to provide an interpretation of the symbols recorded on the tape.
11. a Turing machine is a model of computation that completely captures the notion of computability, while remaining simple to reason about, without all the specific details of your PC's architecture.
12. The (generally accepted) "Church-Turing thesis" asserts that every device or model of computation is no more powerful than a Turing machine.
14. Many theoretical problems (e.g., classes like P and NP, the notion of "polynomial-time algorithm", and so on) are formally stated in terms of Turing machines, although of course they can be adapted to other models as well.
15. People talk about Turing machines because they are a precise and fully specified way to say what a "computer" is, without having to describe every detail of the CPU's architecture, its constraints, and so on.
16. To be very technical, there is one key difference between a PC and a Turing Machine: Turing Machines have infinite memory. This means a PC is not (and no tangible device ever could be) technically as powerful as a Turing Machine, although in practice the difference is trivial.
17. A Turing-machine is a theoretical machine that can be used to reason about the limits of computers. Simply put, it is an imaginary computer with infinite memory.
18. We care about Turing-machines because they help us discover what is impossible to accomplish with real computers (like your IBM PC). If it is impossible for a Turing machine to perform a particular computation (like deciding the Halting Problem), then it stands to reason that it is impossible for your IBM PC to perform that same computation
19. Turing machines are basic abstract symbol-manipulating devices which, despite their simplicity, can be adapted to simulate the logic of any computer algorithm. They were described in 1936 by Alan Turing. Turing machines are not intended as a practical computing technology, but a thought experiment about the limits of mechanical computation. Thus they were not actually constructed. Studying their abstract properties yields many insights into computer science and complexity theory.
20. A Turing machine that is able to simulate any other Turing machine is called a Universal Turing machine (UTM, or simply a universal machine). A more mathematically-oriented definition with a similar "universal" nature was introduced by Alonzo Church, whose work on lambda calculus intertwined with Turing's in a formal theory of computation known as the Church-Turing thesis. The thesis states that Turing machines indeed capture the informal notion of effective method in logic and mathematics, and provide a precise definition of an algorithm or 'mechanical procedure'.
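The machine described in items 1-10 above can be sketched as a small simulator (the rule format follows the 4-tuples of item 5; the blank symbol '0', the bounded step count, and the halting convention are assumptions made for this sketch):

```python
# A minimal Turing machine simulator. Rules are 4-tuples
# (state, symbol, next_state, action), where action is either a symbol
# to write in the current cell, or '<' / '>' to move the head one cell
# left or right (standing in for « and » above).
def run_turing_machine(rules, tape, state, halt_state, max_steps=10_000):
    table = {(s, sym): (nxt, act) for s, sym, nxt, act in rules}
    tape = dict(enumerate(tape))      # sparse tape; blank cells read '0'
    head = 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        key = (state, tape.get(head, '0'))
        if key not in table:          # no applicable rule: machine halts
            break
        state, act = table[key]
        if act == '>':
            head += 1
        elif act == '<':
            head = max(0, head - 1)   # the tape has a left end
        else:
            tape[head] = act          # write the symbol in the cell
    return state, [tape.get(i, '0') for i in range(max(tape) + 1)]
```

For example, a three-rule machine can write '1', move right, and write another '1' before halting.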
21. Using this scheme, we can demonstrate a method for giving evidence that certain problems are computationally hard, even if we are unable to prove that they are.
22. You have several options when you confront a problem that appears to be computationally hard. First, by understanding which aspect of the problem is at the root of the difficulty, you may be able to alter it so that the problem is more easily solvable. Second, you may be able to settle for less than a perfect solution to the problem. In certain cases finding solutions that only approximate the perfect one is relatively easy. Third, some problems are hard only in the worst case situation, but easy most of the time. Depending on the application, you may be satisfied with a procedure that occasionally is slow but usually runs quickly. Finally, you may consider alternative types of computation, such as randomized computation, that can speed up certain tasks.
23. The theories of computability and complexity are closely related. In complexity theory, the objective is to classify problems as easy ones and hard ones, whereas in computability theory the classification of problems is by those that are solvable and those that are not. Computability theory introduces several of the concepts used in complexity theory.
24. Directed graphs are a handy way of depicting binary relations.
25. Strings of characters are fundamental building blocks in computer science.
26. Theorems and proofs are the heart and soul of mathematics and definitions are its spirit.
27. Researchers sometimes work for weeks or even years to find a single proof.
Learning:
1. Scientific research uses R&D to uncover hidden theories, which help humans understand a subject further. These theories are very important and form the foundation of further research.
Theory:
Savitch's Theorem
Cook-Levin theorem
Term:
Turing Machine
Non-deterministic Turing Machine [NTM]
Deterministic Turing Machine [DTM]
Asymptotic Analysis
Polynomial Bounds
Exponential Bounds
linear time
Polynomial difference
polynomiality
nonpolynomiality
Cook-Levin theorem
PSPACE-Hard
Sublinear Space Complexity
Expressions:
Arithmetic Expression
Polynomial Expression
Algebraic Expression
Closed-form Expression
Analytic Expression
Mathematical Expression
Books:
Introduction to the Theory of Computation
Questions:
What are the fundamental capabilities and limitations of computers?
What makes some problems computationally hard and others easy?
People:
Turing is widely considered to be the father of theoretical computer science and artificial intelligence.
References:
https://en.wikipedia.org/wiki/Cobham%27s_thesis
https://en.wikipedia.org/wiki/Polynomial#Generalizations_of_polynomials
https://en.wikipedia.org/wiki/Trigonometric_functions
https://en.wikipedia.org/wiki/Millennium_Prize_Problems#P_versus_NP
https://en.wikipedia.org/wiki/Non-deterministic_Turing_machine
http://plato.stanford.edu/entries/turing-machine/
https://en.wikipedia.org/wiki/State_diagram
http://stackoverflow.com/questions/236000/whats-a-turing-machine
https://en.wikipedia.org/wiki/NP-hardness
http://stackoverflow.com/questions/1857244/what-are-the-differences-between-np-np-complete-and-np-hard
https://en.wikipedia.org/wiki/Computer-assisted_proof